How to fill kernel entropy without X and hardware RNG?

I have a tiny embedded device running Linux, but with no hardware RNG driver and no X server (no mouse, no keyboard...).
/dev/random blocks very quickly.
cat /proc/sys/kernel/random/entropy_avail reports very low numbers (~10).
The system handles a camera, so there is a real source of entropy. How can I feed entropy into the kernel?

Take a data stream from your camera, hash it using something decent like BLAKE2b or SHA-2, then feed it into /dev/random. Note that writing to /dev/random mixes the data into the pool but does not raise the entropy estimate; to credit entropy you need the RNDADDENTROPY ioctl (root only), as sketched below.
Once the entropy count is >= 256 you are good to go.
From then on, only read from /dev/urandom.
/dev/urandom will happily spew out cryptographically secure pseudorandom data suitable for key material once the system has 256 bits of entropy available.
Running out of entropy after you've collected this amount is a myth. Use /dev/urandom, really, it's perfectly fine.
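A minimal C sketch of that ioctl, assuming the camera data has already been hashed down to a digest (add_entropy is a hypothetical helper name, and the bit count you pass should be a conservative estimate of the entropy actually gathered):

    /* Credit entropy to the kernel pool via RNDADDENTROPY.
     * Writing to /dev/random only mixes data in; this ioctl also
     * increases entropy_avail. Must run as root. */
    #include <fcntl.h>
    #include <linux/random.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int add_entropy(const unsigned char *hashed, size_t len, int bits)
    {
        struct {
            struct rand_pool_info info;
            unsigned char buf[64];      /* room for one hash output */
        } e;

        if (len > sizeof(e.buf))
            return -1;

        int fd = open("/dev/random", O_WRONLY);
        if (fd < 0)
            return -1;

        e.info.entropy_count = bits;    /* conservative estimate, in bits */
        e.info.buf_size = len;
        memcpy(e.buf, hashed, len);

        int rc = ioctl(fd, RNDADDENTROPY, &e.info);
        close(fd);
        return rc;
    }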

You should give haveged a try.
It comes with most distributions, and you can also install it easily on custom distributions.
It's a userspace daemon that is meant to solve exactly your problem.
See the man page: https://linux.die.net/man/8/haveged

Related

Which library/code is responsible for rendering the terminal in retro computers?

For example, as you type, which library is telling the computer screen to display the respective ASCII character and to move the cursor accordingly?
Imagine something like the old-school computers (with no GUI) running DOS or BASIC... what/which library is responsible for the UI?
Links to source code would be great for understanding how said library(ies) work.
The photo you have posted is of a BBC Micro running in Mode 7. This was an exception to most rules. Mode 7 was a low-memory mode, in which there were no pixels, just 256 text characters. 1K of RAM was reserved to contain what was displayed on the screen at that moment. A special chip on the circuit board, called the Video ULA (Uncommitted Logic Array), read the contents of that memory and converted it to the video output. The ULA was fixed-function hardware and could not be changed by the programmer.
The ZX81 worked in a similar way: 256 possible text characters and no pixels. However, the ZX81 had fewer dedicated chips and the main CPU did most of the work.
A more common setup was that every pixel was represented by a number of bits in memory (often more than one bit per pixel was needed because colours had to be indicated). Examples are the BBC in modes 0-6, the Acorn Electron, the Spectrum, the C64, and many others. When the user placed text on the screen, the computer's ROM would convert it to the correct pixels. Graphics could often be written directly to the RAM, or 'plotted' via BASIC. Once again, dedicated chips and circuitry would then render this memory to the output. This approach required much more memory for the display.
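To make the bitmapped approach concrete, here is a hypothetical C sketch of what such a character-rendering routine does: copy an 8x8 font glyph, one byte per pixel row, into the right spot in screen memory. The layout (one bit per pixel, 40 bytes per scan line) is an assumption for illustration, not any particular machine's:

    #include <stdint.h>

    #define BYTES_PER_ROW 40   /* 320 pixels per scan line, 1 bit per pixel */

    /* Copy an 8x8 glyph into the byte-aligned character cell (col, row). */
    void draw_glyph(uint8_t *screen, const uint8_t glyph[8], int col, int row)
    {
        for (int y = 0; y < 8; y++)
            screen[(row * 8 + y) * BYTES_PER_ROW + col] = glyph[y];
    }

The ROM's text-output routine would look the glyph up in a font table by character code and then do exactly this kind of copy.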
Every 8-bit computer had its own way of representing the display in RAM. You need to get the manuals for the machine you are trying to program (easy to find on the internet for the better-known micros).
Many emulators are open source, if you want to see the internals. For example: https://github.com/stardot/beebem
If you're interested in seeing the internals of something closely tied to the terminal, to better understand how input and output are handled, Bash (a shell rather than a terminal emulator) is completely open source. You can download its latest source code here.

Differences between /dev/random and /dev/urandom

I'm trying to find out the differences between the /dev/random and /dev/urandom files.
What are the differences between /dev/random and /dev/urandom?
When should I use them?
When should I not use them?
Using /dev/random may require waiting for the result, as it draws from the so-called entropy pool, where random data may not be available at the moment.
/dev/urandom returns as many bytes as the user requests without blocking, even when the entropy estimate is low; this is why it is traditionally described as less random than /dev/random.
As can be read from the man page:
random
When read, the /dev/random device will only return random bytes within
the estimated number of bits of noise in the entropy pool. /dev/random
should be suitable for uses that need very high quality randomness
such as one-time pad or key generation. When the entropy pool is
empty, reads from /dev/random will block until additional
environmental noise is gathered.
urandom
A read from the /dev/urandom device will not block waiting for more
entropy. As a result, if there is not sufficient entropy in the
entropy pool, the returned values are theoretically vulnerable to a
cryptographic attack on the algorithms used by the driver. Knowledge
of how to do this is not available in the current unclassified
literature, but it is theoretically possible that such an attack may
exist. If this is a concern in your application, use /dev/random
instead.
For cryptographic purposes you should really use /dev/random, because of the nature of the data it returns. The possible waiting should be considered an acceptable tradeoff for the sake of security, IMO.
When you need random data fast, you should use /dev/urandom, of course.
Source: Wikipedia page, man page
Always use /dev/urandom.
/dev/urandom and /dev/random use the same random number generator. They are both seeded by the same entropy pool. They both will give equally random numbers of arbitrary size. They both can give an infinite amount of random numbers with only a 256-bit seed. As long as the initial seed has 256 bits of entropy, you can have an infinite supply of arbitrarily long random numbers. You gain nothing from using /dev/random. The fact that there are two devices is a flaw in the Linux API.
If you are concerned about entropy, using /dev/random is not going to fix that. It will slow down your application while not generating numbers any more random than /dev/urandom. And if you aren't concerned about entropy, why are you using /dev/random at all?
Here's a much better, in-depth explanation of why you should always use /dev/urandom: http://www.2uo.de/myths-about-urandom/
The kernel developers are discussing removing /dev/random: https://lwn.net/SubscriberLink/808575/9fd4fea3d86086f0/
What are the differences between /dev/random and /dev/urandom?
/dev/random and /dev/urandom are interfaces to the kernel's random number generator:
Reading returns a stream of random bytes strong enough for use in cryptography
Writing to them provides the kernel with data to update the entropy pool
When it comes to the differences, it depends on the operating system:
On Linux, reading from /dev/random may block, which limits its use in practice considerably
On FreeBSD, there is no difference: /dev/urandom is just a symbolic link to /dev/random.
When should I use them?
When should I not use them?
It is very difficult to find a use case where you should use /dev/random over /dev/urandom.
Danger of blocking:
This is a real problem that you will have to face when you decide to use /dev/random. For single usages like ssh-keygen it should be OK to wait for some seconds, but in most other situations it will not be an option.
If you use /dev/random, you should open it in nonblocking mode and provide some sort of user notification if the desired entropy is not immediately available.
Security:
On FreeBSD, there is no difference anyway, but also on Linux /dev/urandom is considered secure for almost all practical cases (e.g., Is a rand from /dev/urandom secure for a login key? and Myths about /dev/urandom).
The situations where it could make a difference are edge cases like a fresh Linux installation. To cite from the Linux man page:
The /dev/random interface is considered a legacy interface, and /dev/urandom is preferred and sufficient in
all use cases, with the exception of applications which require randomness during early boot time; for
these applications, getrandom(2) must be used instead, because it will block until the entropy pool is initialized.
If a seed file is saved across reboots as recommended below (all major Linux distributions have done this
since 2000 at least), the output is cryptographically secure against attackers without local root access as
soon as it is reloaded in the boot sequence, and perfectly adequate for network encryption session keys.
Since reads from /dev/random may block, users will usually want to open it in nonblocking mode (or perform
a read with timeout), and provide some sort of user notification if the desired entropy is not immediately available.
Recommendation
As a general rule, /dev/urandom should be used for everything except long-lived GPG/SSL/SSH keys.
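As the man page excerpt above notes, getrandom(2) is the interface that handles the early-boot edge case: it blocks until the entropy pool has been initialized once, and never again after that. A minimal sketch (requires Linux 3.17+ and glibc 2.25+):

    #include <stdio.h>
    #include <sys/random.h>

    int main(void)
    {
        unsigned char key[32];          /* a 256-bit key */

        /* Blocks only if the pool has never been initialized. */
        if (getrandom(key, sizeof(key), 0) != sizeof(key)) {
            perror("getrandom");
            return 1;
        }
        /* key[] now holds cryptographically secure random bytes. */
        return 0;
    }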
Short answer
Use /dev/urandom
Long answer
They are both fed by the same cryptographically secure pseudorandom number generator (CSPRNG). The fact that /dev/random waits for entropy (or more specifically, waits for the system's estimation of its entropy to reach an appropriate level) only makes a difference when you are using an information-theoretically secure algorithm, as opposed to a computationally secure algorithm. The former encompasses algorithms that you probably aren't using, such as Shamir's Secret Sharing and the one-time pad. The latter contains the algorithms that you actually use and care about, such as AES, RSA, and Diffie-Hellman (as used by OpenSSL, GnuTLS, and so on).
So it doesn't matter whether you use numbers from /dev/random: they're getting pumped out of a CSPRNG either way, and the algorithms you're likely using them with are only "theoretically possible" to break anyway.
Lastly, that "theoretically possible" bit means just that. In this case, it means using all of the computing power in the world, for the amount of time the universe has existed, to crack the application.
Therefore, there is pretty much no point in using /dev/random.
So use /dev/urandom.
Sources: 1, 2, 3

What is entropy starvation?

I was lost when reading
"Knowing how Linux behaves during entropy starvation (and being able to find the cause) allows us to efficiently use our server hardware."
in a blog. I then looked up the meaning of 'entropy' in the context of Linux, but it is still not clear to me what "entropy starvation" is, or what the sentence quoted above means.
Some applications, notably cryptography, need random data. In cryptography, it is very important that the data be truly random, or at least unpredictable (even in part) to any attacker.
To supply this data, a system keeps a pool of random data, called entropy, that it collects from various sources of randomness on the system: Precise timing of events that might be somewhat random (keys pressed by users, interrupts from external devices), noise on a microphone, or, on some processors, dedicated hardware for generating random values. The incoming somewhat-random data is mixed together to produce better quality entropy.
These sources of randomness can only supply data at certain rates. If a system is used to do a lot of work that needs random data, it can use up more random data than is available. Then software that wants random data has to wait for more to be generated or it has to accept lower-quality data. This is called entropy starvation or entropy depletion.
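If you want to watch for starvation yourself, the kernel exposes its current estimate in /proc. A small illustrative C reader (it prints the same number you would get from cat /proc/sys/kernel/random/entropy_avail):

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");
        int bits;

        if (f && fscanf(f, "%d", &bits) == 1)
            printf("entropy available: %d bits\n", bits);
        if (f)
            fclose(f);
        return 0;
    }

A value that stays near zero while applications block on /dev/random is the classic symptom of entropy starvation.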

Random number generator /dev/random

I read that the random number generator /dev/random on Mac and Solaris includes 160 bits of entropy. What can I do if I need more entropy, for example 200 bits? Thanks in advance.
I'm not sure where you read that 160-bit estimate -- I believe that Solaris, Mac and most BSDs use a 256-bit Yarrow implementation. At any rate, the entropy pool is regularly refilled from even the smallest amount of network or disk activity. So, even though /dev/random on non-Linux systems doesn't actually block "waiting for more entropy" (it's more like a supposedly higher-quality version of /dev/urandom, to which on these systems it's typically linked), nothing stops you (if you trust, say, no more than 160 bits at a time from the device) from "blocking and refreshing entropy" yourself -- get N bits, do some disk or network I/O, get another N bits, and so forth.
And if you think your disk access is too predictable, you could go for some really bizarre sources like, say, a few of the most recent Twitter entries, if your program has internet access ;)
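A purely illustrative C sketch of that "block and refresh" idea (read_bits is a hypothetical helper, sync() merely stands in for "do some disk I/O", and 160 + 40 bits gives the 200 bits asked about):

    #include <stdio.h>
    #include <unistd.h>

    static int read_bits(FILE *dev, unsigned char *out, int nbits)
    {
        size_t n = (nbits + 7) / 8;
        return fread(out, 1, n, dev) == n;
    }

    int main(void)
    {
        FILE *dev = fopen("/dev/random", "rb");
        unsigned char buf[25];          /* 200 bits = 25 bytes */

        if (!dev || !read_bits(dev, buf, 160))
            return 1;
        sync();                         /* stir in some disk activity */
        if (!read_bits(dev, buf + 20, 40))
            return 1;
        fclose(dev);
        return 0;
    }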

Arduino: Lightweight compression algorithm to store data in EEPROM

I want to store a large amount of data on my Arduino with an ATmega168/ATmega328 microcontroller, but unfortunately there's only 256 KB / 512 KB of EEPROM storage.
My idea is to use a compression algorithm to strip down the size. But my knowledge of compression algorithms is quite limited, and my search for ready-to-use libraries failed.
So, is there a good way to optimize the storage size?
You might have a look at the LZO algorithm, which is designed to be lightweight. I don't know whether there are any implementations for the AVR system, but it might be something you could implement yourself.
You may be somewhat misinformed about the amount of storage available in EEPROM on your chip, though; according to the datasheet I have, the EEPROM sizes are:
ATmega48P: 256
ATmega88P: 512
ATmega168P: 512
ATmega328P: 1024
Note that those values are in bytes, not KB as you mention in your question. This is, by any measure, a very small amount.
AVRs only have a few kilobytes of EEPROM at most, and very few have more than 64K of flash (no standard Arduinos do).
If you need to store something that is seldom modified, for instance an image, you could try using the flash, as there is much more space there to work with. For simple images, some crude RLE encoding would go a long way.
Compressing anything more random, for instance logged data or audio, will take a tremendous amount of overhead for the AVR; you will have better luck getting a serial EEPROM chip to hold this data. Arduino's site has a page on interfacing with a 64K chip. If you want more than that, look at interfacing with an SD card over SPI, for instance in this audio shield.
A NASA study here (PostScript).
A repost of a 1989 article on LZW here.
Keep it simple and perform an analysis of the cost/payoff of adding compression. This includes time and effort, complexity, resource usage, data compressibility, and so on.
An algorithm something like LZSS would probably be a good choice for an embedded platform. They are simple algorithms, and don't need much memory.
LZS is one I'm familiar with. It uses a 2 kB dictionary for compression and decompression (the dictionary is the most recent 2 kB of the uncompressed data stream). (LZS was patented by HiFn; however, as far as I can tell, all patents have expired.)
But I see that the ATmega chips used on recent Arduinos only have 512 bytes to 2 kB of SRAM (2 kB on the ATmega328), so maybe even LZS is too big. I'm sure you could use a variant with a smaller dictionary, but I'm not sure what compression ratios you'd achieve.
The method described in the paper “Data Compression Algorithms for Energy-Constrained Devices in Delay Tolerant Networks” might run on an ATmega328.
Reference: C. Sadler and M. Martonosi, "Data Compression Algorithms for Energy-Constrained Devices in Delay Tolerant Networks," Proceedings of the ACM Conference on Embedded Networked Sensor Systems (SenSys), November 2006 (PDF).
S-LZW source for MSPGCC: slzw.tar.gz (updated 10 March 2007).
You might also want to take a look at LZJB, which is very short, simple, and lightweight.
Also, FastLZ might be worth a look. It gets better compression ratios than LZJB and has pretty minimal memory requirements for decompression.
If you just want to remove some repeating zeros or the like, use run-length encoding.
Repeating byte sequences will be stored as:
<mark><byte><count>
It's a super-simple algorithm, which you can probably code yourself in a few lines; see the sketch below.
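A minimal C sketch of that <mark><byte><count> scheme (MARK is a hypothetical escape byte, and out must be sized for the worst case, up to three bytes per input byte when the data is full of literal MARK bytes):

    #include <stddef.h>
    #include <stdint.h>

    #define MARK 0xAB   /* escape byte -- assumed to be rare in the data */

    size_t rle_encode(const uint8_t *in, size_t n, uint8_t *out)
    {
        size_t o = 0;
        for (size_t i = 0; i < n; ) {
            size_t run = 1;
            while (i + run < n && in[i + run] == in[i] && run < 255)
                run++;
            if (run >= 4 || in[i] == MARK) {    /* long run, or a literal
                                                   MARK that must be escaped */
                out[o++] = MARK;
                out[o++] = in[i];
                out[o++] = (uint8_t)run;
            } else {                            /* short run: store literally */
                for (size_t j = 0; j < run; j++)
                    out[o++] = in[i];
            }
            i += run;
        }
        return o;
    }

The decoder is symmetric: on MARK, read the next two bytes as <byte><count> and expand; anything else is a literal byte.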
Is an external EEPROM (for example via I2C) not an option? Even if you use a compression algorithm, the downside is that the amount of data you can store in the internal EEPROM can no longer be determined in a simple way.
And of course, if you really mean kilobytes, then consider an SD card connected to the SPI bus... There are some lightweight open-source FAT-compatible file systems on the net.
heatshrink is a data compression/decompression library for embedded/real-time systems based on LZSS. Its documentation says it can run in under 100 bytes of memory.
