I'm wondering how one goes about reversing an algorithm, such as one for storing logins or PIN codes.
Let's say I have a set of data where:
7262627 -> ? -> 8172
5353773 -> ? -> 1132
etc. This is just an example. Or say a hex string that is transformed into another.
&h8712 -> &h1283 or something like that.
How do I go about starting to figure out what that algorithm is? Where does one start?
Would you start trying different shifts, xors and hope something stands out? I'm sure there's a better way as this seems like stabbing in the dark.
Is it even practically possible to reverse engineer this kind of algorithm?
Sorry if this is a stupid question. Thanks for your help / pointers.
There are a few things people try:
Get the source code, or disassemble an executable.
Guess, based on the hash functions other people use. For example, a hash consisting of 32 hex digits might well be one or more repetitions of MD5, and if you can get a single input/output pair then it is quite easy to confirm or refute this (although see "salt", below).
Statistically analyze a large number of pairs of inputs and outputs, looking for any kind of pattern or correlations, and relate those correlations to properties of known hash functions and/or possible operations that the designer of the system might have used. This is beyond the scope of a single technique, and into the realms of general cryptanalysis.
Ask the author. Secure systems don't usually rely on the secrecy of the hash algorithms they use (and don't usually stay secure long if they do). The examples you give are quite small, though, and secure hashing of passwords would always involve a salt, which yours apparently don't. So we might not be talking about the kind of system whose author is confident enough to do that.
In the case of a hash where the output is only 4 decimal digits, you can attack it simply by building a table of every possible 7 digit input, together with its hashed value. You can then reverse the table and you have your (one-to-many) de-hashing operation. You never need to know how the hash is actually calculated. How do you get the input/output pairs? Well, if an outsider can somehow specify a value to be hashed, and see the result, then you have what's called a "chosen plaintext", and an attack relying on that is a "chosen plaintext attack". So a 7 digit -> 4 digit hash would be very weak indeed if it was used in a way which allowed chosen plaintext attacks to generate a lot of input/output pairs. I realise that's just one example, but it's also just one example of a technique to reverse it.
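As a rough sketch of that table-building attack in Python (unknown_hash here is a hypothetical stand-in for the black-box system you can query with chosen plaintexts):

from collections import defaultdict

def build_dehash_table(unknown_hash):
    # Enumerate every 7-digit input and record which inputs map to each
    # 4-digit output. We never need to know how the hash works internally.
    table = defaultdict(list)
    for n in range(10**7):
        plaintext = "%07d" % n
        table[unknown_hash(plaintext)].append(plaintext)
    return table

# Usage: build_dehash_table(h)["8172"] lists every 7-digit input that
# hashes to 8172, i.e. the one-to-many "de-hashing" operation.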
Note that reverse engineering the hash, and actually reversing it, are two different things. You could figure out that I'm using SHA-256, but that wouldn't help you reverse it (i.e., given an output, work out the input value). Nobody knows how to fully reverse SHA-256, although of course there are always rainbow tables (see "salt", above) <conspiracy>At least nobody admits they do, so it's no use to you or me.</conspiracy>
Probably, you can't. Suppose the transformation function is known, something like
import hashlib

def hash(text):
    return hashlib.sha1(("secret salt" + text).encode()).hexdigest()
But the "secret salt" is not known, and is cryptographically strong (a very large, random integer). You could never brute force the secret salt from even a very large number of plain-text, crypttext pairs.
In fact, if the precise hash function used was known to be one of two equally strong functions, you could never even get a good guess between which one was being used.
Stabbing in the dark will drive you to insanity. There are some algorithms that, given current understanding, you couldn't hope to deduce the inner workings of between now and the [predicted] end of the universe without knowing the exact details (potentially including private keys or internal state). Of course, some of these algorithms are the foundations of modern cryptography.
If you know in advance that there's a pattern to be discovered though, there are sometimes ways of approaching this. For instance, if the dataset contains several input values that differ by 1, compare the corresponding output values:
7262627 -> 8172
7262628 -> 819
7262629 -> 1732
...
7262631 -> 3558
Here it's fairly clear (given a few minutes and a calculator) that when the input increases by 1, the output increases by 913 modulo 8266 (i.e. a simple linear congruential generator).
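Given such consecutive-input pairs, checking a guess like that takes only a couple of lines (a sketch; the modulus 8266 is the one deduced above):

def constant_step(outputs, modulus):
    # If each +1 step in the input adds a constant to the output modulo
    # the modulus, every difference of consecutive outputs will agree.
    steps = {(b - a) % modulus for a, b in zip(outputs, outputs[1:])}
    return steps.pop() if len(steps) == 1 else None

# With the data above: constant_step([8172, 819, 1732], 8266) -> 913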
Differential cryptanalysis is a relatively modern technique used to analyse the strength of cryptographic block ciphers, relying on a similar but more complex idea for where the cipher algorithm is known, but it's assumed the private key isn't. Input blocks differing from each other by a single bit are considered and the effect of that bit is traced through the cipher to deduce how likely each output bit is to "flip" as a result.
Other ways of approaching this kind of problem would be to look at the extremes (maximum, minimum values), distribution (leading to frequency analysis), direction (do the numbers always increase? decrease?) and (if this is allowed) consider the context in which the data sets were found. For instance, some types of PIN codes always contain a repeated digit to make them easier to remember (I'm not saying a PIN code can necessarily be deduced from anything else - just that a repeated digit is one less digit to worry about!).
Is it even practically possible to reverse engineer this kind of algorithm?
It is possible with a flawed algorithm and enough input/output pairs, but a well-designed algorithm makes it computationally infeasible.
Related
So I've read the Wikipedia page on Hash functions as I'm currently playing with some.
Both that page and other sources I've read mention that the distribution of the data affects the hash function.
Despite some explanations, it is still unclear to me what exactly those effects are and why. So my questions:
Just to make sure I've got it right: when they mention distribution, is this the frequency of each word in the input data set?
What effect does the distribution of the input data have on hash functions? Of particular interest is the performance of the hash function, in terms of both speed and the uniformity of the output produced by the hash algorithm.
EDIT 1:
I'm thinking specifically of the Wikipedia English corpus vs data from a more dynamic source, Twitter's tweets for example.
Usually you do not have as many input datasets as you have possible inputs. The distribution is therefore more of a probability that a certain input with certain features will be picked (essentially the same as you said, but with p < 1 for every word instead of some count n > 1). E.g. if you know that the first bit of the input will always be 1, then the data is not uniformly distributed.
If your hash were very simple, e.g. only taking the first byte as the 'hash', then this non-uniform distribution would lead to more collisions than anticipated (only 128 values would be possible even though you expected 256 different values).
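A tiny sketch of that effect (the inputs here are plain ASCII words, so the first byte is always below 128):

def first_byte_hash(data: bytes) -> int:
    # A deliberately bad hash: just the first byte of the input.
    return data[0]

words = [b"apple", b"banana", b"cherry", b"avocado", b"apricot"]
print(len({first_byte_hash(w) for w in words}))  # 3 distinct hashes for 5 inputs
# Since ASCII bytes are all below 128, at most 128 of the 256 possible
# hash values can ever occur, doubling the expected collision rate.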
Most (cryptographic) hash functions that you might know by name are good enough that you do not have to care about this. For cryptography it is even an explicit condition: you must not be able to tell how many bits of the input changed just by looking at the difference between the hashes. That does not mean it is impossible, though. I vaguely remember a paper stating an increased collision rate for MD5 when only ASCII letters and digits were hashed. I cannot find it right now, so treat this piece of information with care - but even if I have mixed something up, such a scenario is easily possible. And no matter whether it is MD5 or some other algorithm, if you actually have such a relation, then your distribution of input datasets certainly becomes relevant again.
A coworker was recently asked this when trying to land a (different) research job:
Given 10 128-character strings which have been permutated in exactly the same way, decode the strings. The original strings are English text with spaces, numbers, punctuation and other non-alpha characters removed.
He was given a few days to think about it before an answer was expected. How would you do this? You can use any computer resource, including character/word level language models.
This is a basic transposition cipher. My question above was simply to determine if it was a transposition cipher or a substitution cipher. Cryptanalysis of such systems is fairly straightforward. Others have already alluded to basic methods. Optimal approaches will attempt to place the hardest and rarest letters first, as these will tend to uniquely identify the letters around them, which greatly reduces the subsequent search space. Simply finding a place to place an "a" (no pun intended) is not hard, but finding a location for a "q", "z", or "x" is a bit more work.
The overarching measure of an algorithm's quality isn't just that it deciphers the text (that can be done by better-than-brute-force methods), nor simply that it is fast, but that it eliminates possibilities as quickly as possible.
Since you can use multiple strings simultaneously, attempting to create words from the rarest characters is going to allow you to test dictionary attacks in parallel. Finding the correct placement of the rarest terms in each string as quickly as possible will decipher that ciphertext PLUS all of the others at the same time.
If you search for cryptanalysis of transposition ciphers, you'll find a bunch of approaches using genetic algorithms. These are meant to advance the research cred of people working in GAs, as they are not really optimal in practice. Instead, you should look at some basic optimization methods, such as branch and bound, A*, and a variety of statistical methods. (How deep you should go depends on your level of expertise in algorithms and statistics. :) I would switch between deterministic methods and statistical optimization methods several times.)
In any case, the calculations should be dirt cheap and fast, because the scale of initial guesses could be quite large. It's best to have a cheap way to filter out a LOT of possible placements first, then spend more CPU time on sifting through the better candidates. To that end, it's good to have a way of describing the stages of processing and the computational effort for each stage. (At least that's what I would expect if I gave this as an interview question.)
You can even buy a fairly credible reference book on deciphering double transposition ciphers.
Update 1: Take a look at these slides for more ideas on iterative improvements. It's not a great reference set of slides, but it's readily accessible. What's more, although the slides are about GA and simulated annealing (methods that come up a lot in search results for transposition cipher cryptanalysis), the author advocates against such methods when you can use A* or other methods. :)
first, you'd need a test for the correct ordering. something fairly simple like being able to break the majority of texts into words using a dictionary ordered by frequency of use without backtracking.
once you have that, you can play with various approaches. two i would try are:
using a genetic algorithm, with scoring based on 2 and 3-letter tuples (which you can either get from somewhere or generate yourself). the hard part of genetic algorithms is finding a good description of the process that can be fragmented and recomposed. i would guess that something like "move fragment x to after fragment y" would be a good approach, where the indices are positions in the original text (and so change as the "dna" is read). also, you might need to extend the scoring with something that gets you closer to "real" text near the end - something like the length over which the verification algorithm runs, or complete words found.
using a graph approach. you would need to find a consistent path through the graph of letter positions, perhaps with a beam-width search, using the weights obtained from the pair frequencies. i'm not sure how you'd handle reaching the end of the string and restarting, though. perhaps 10 sentences is sufficient to identify with strong probability good starting candidates (from letter frequency) - wouldn't surprise me.
this is a nice problem :o) i suspect 10 sentences is a strong constraint (for every step you have a good chance of common letter pairs in several strings - you probably want to combine probabilities by discarding the most unlikely, unless you include word start/end pairs) so i think the graph approach would be most efficient.
Frequency analysis would drastically prune the search space. The most-common letters in English prose are well-known.
Count the letters in your encrypted input, and put them in most-common order. Matching most-counted to most-counted, translate the cipher text back into an attempted plain text. It will be close to right, but likely not exact. By hand, iteratively tune your permutation until plain text emerges (typically only a few iterations are needed).
If you find checking by hand odious, run attempted plain texts through a spell checker and minimize violation counts.
First you need a scoring function that increases as the likelihood of a correct permutation increases. One approach is to precalculate the frequencies of triplets in standard English (get some data from Project Gutenberg) and add up the frequencies of all the triplets in all ten strings. You may find that quadruplets give a better outcome than triplets.
Second you need a way to produce permutations. One approach, known as hill-climbing, takes the ten strings and enters a loop. Pick two random integers from 1 to 128 and swap the associated letters in all ten strings. Compute the score of the new permutation and compare it to the old permutation. If the new permutation is an improvement, keep it and loop, otherwise keep the old permutation and loop. Stop when the number of improvements slows below some predetermined threshold. Present the outcome to the user, who may accept it as given, accept it and make changes manually, or reject it, in which case you start again from the original set of strings at a different point in the random number generator.
Instead of hill-climbing, you might try simulated annealing. I'll refer you to Google for details, but the idea is that instead of always keeping the better of the two permutations, sometimes you keep the lesser of the two permutations, in the hope that it leads to a better overall outcome. This is done to defeat the tendency of hill-climbing to get stuck at a local maximum in the search space.
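A minimal sketch of that hill-climbing loop, assuming you already have a score() function built from precomputed trigram (or quadruplet) frequencies, where higher scores mean more English-like text:

import random

def hill_climb(strings, score, max_stale=10000):
    # One permutation (a list of column indices) is applied to all ten
    # strings at once; a random transposition is kept only if it improves
    # the total score.
    n = len(strings[0])
    perm = list(range(n))

    def apply_perm(p):
        return ["".join(s[i] for i in p) for s in strings]

    best, stale = score(apply_perm(perm)), 0
    while stale < max_stale:
        i, j = random.randrange(n), random.randrange(n)
        perm[i], perm[j] = perm[j], perm[i]
        new = score(apply_perm(perm))
        if new > best:
            best, stale = new, 0
        else:
            perm[i], perm[j] = perm[j], perm[i]  # undo the swap
            stale += 1
    return apply_perm(perm)

Turning this into simulated annealing only changes the acceptance test: occasionally keep a worse permutation, with a probability that shrinks over time.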
By the way, it's "permuted" rather than "permutated."
Given a pseudo-random binary sequence (e.g. 00101010010101) of finite length, predict how the sequence will continue. Can someone please tell me the easiest way to do it? Or, in case it's too difficult for someone who can barely play solitaire on their computer, can someone tell me where to take my first steps...
PS: can this technique be used to predict the colour of the next electronic roulette number (e.g.: assigning 1 and 0 to red and black respectively)?
Cryptographically secure pseudorandom number generators are intended specifically to make what you want to do impossible. In particular, they satisfy the "next bit test": given k bits of their output, you cannot guess bit k+1 with probability greater than 1/2.
Plain pseudorandom number generators that do not satisfy the next bit test can be attacked and in fact security vulnerabilities have been discovered in real world systems due to the choice of PRNG. In particular, linear congruential generators are known to be somewhat (or completely) predictable, and some versions of Unix random may use this algorithm. This method is quite math intensive though. If you want to go down this path a search for "linear congruential generator prediction" is a place to start.
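As a taste of what that involves, here is a sketch for the simplest case: a classic LCG of the form x_{n+1} = (a*x_n + c) mod m that outputs its full state, with the modulus m known or guessed. Real attacks also handle unknown moduli and truncated outputs, which takes considerably more math.

def crack_lcg(x, m):
    # x: at least three consecutive full-state outputs.
    # Solve x[1] = a*x[0] + c and x[2] = a*x[1] + c (mod m) for a and c.
    # Assumes x[1] - x[0] is invertible mod m (Python 3.8+ for pow(..., -1, m)).
    a = ((x[2] - x[1]) * pow(x[1] - x[0], -1, m)) % m
    c = (x[1] - a * x[0]) % m
    return a, c

def predict_next(x_last, a, c, m):
    return (a * x_last + c) % m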
Another attack if you are aware of the PRNG implementation is to try to determine the seed used to generate the sequence you are analyzing. The seed is sometimes based on guessable information like time of day, process ID, etc.
Well, for pseudo-random sequences, the only possibility is to keep count of how many of each value has come before. If the 1s outweigh the 0s, it's more likely that the next one will be 0. How much more likely depends on the relative occurrences of each.
Note that this won't work for true randomness since the events are independent, despite what the statisticians tell you :-)
You'll find that out (painfully) the first time you get a run of 13 reds on the table when you're using the double-on-loss method of playing roulette. In any case, the house derives its advantage from 0 (and double-0 on some tables) which are neither red nor black.
This is a decent question but I think if "you can barely play solitaire" it might be out of your reach right now.
You should look into picking up a basic language. Most people are going to say PHP, but I'm wary of recommending that to a beginner (it's pretty easy to get working though, see: XAMPP). Java is probably an "easy-to-get-running-and-work-with" language, but I'm sure there are better threads on here about which language to start with (Python or something probably wins, because experienced programmers love it).
By the way, your English is fine (I didn't notice you were a non-native English speaker).
Now, as for your question: if you're looking at true pattern matching, I'd be inclined to convert this idea to code:
"CURRENTPOINT" is end of first letter.
LOOP: Pick letter(s) from Start to "CURRENTPOINT"
Break the rest of your binary string into blocks of the same size.
See if these blocks all equal your picked letters.
If not, move "CURRENTPOINT" along and repeat the LOOP until you run out of letters.
If so, you have your "repeating section."
If you're just guessing that the random generator is temporarily biased, and that this bias will return to a baseline (balanced 0s and 1s) in the reasonably short term, then you can compare the counts of 0s and 1s and say that whichever has occurred less is more likely, based on the deviation from your baseline. However, be careful of the Monte Carlo fallacy.
To answer the PS first: No, because roulette spins are independent events so there's nothing predictive in the historical sequence of outcomes.
The general question is hard and interesting.
This website can infer a surprising number of sequences from their initial values:
http://www.research.att.com/~njas/sequences/
Note that it's for arbitrary integer sequences.
I tried it on simple patterns like {0,0,1,1,0,0,1,1,...} and it says the right thing.
I noticed that nobody told you about periodicity.
A pseudo-random sequence is always produced by a mathematical operation (at least until the quantum computer ^^).
A simple way to generate one is to divide by a prime number.
For instance:
1/3 = 1.333333.....
9/7 = 1.2857142857142857142857142857143
Those are fairly small numbers, and what do we notice? Periodicity.
1/3 = 1.3 3 3 3 3 3.....
9/7 = 1.2857 142857 142857 142857 142857 143
The bigger the prime, the longer the repeating block (here 3 and 142857) becomes.
So if you look at a pseudo-random sequence for long enough, you may find a periodicity and be able to "guess" the next number. But that could take a while.
PS: sorry for my English, I’m a bit rusty ^^
What you need to think about is the properties of randomness, study those. For example, "Randomness runs in bunches". Compare a random sequence against a predictable sequence: you won't normally find bunches in the predictable one. To take advantage of bunches wait for the bunch. And with a little luck you will win.
With SHA-1, is it possible to figure out which finite strings will render equal hashes?
What you are looking for is the solution to the Collision Problem (See also collision attack). A well-designed and powerful cryptographic hash function is designed with the intent of as much obfuscating mathematics as possible to make this problem as hard as possible.
In fact, one of the measures of a good hash function is the difficulty of finding collisions. (Among the other measures, the difficulty of reversing the hash function)
It should be noted that, for hashes where the input is a string of any length and the output is a fixed-length string, the Pigeonhole Principle guarantees that collisions exist. However, finding a colliding string is not easy, as it would require essentially blind guess-and-check over an effectively infinite collection of strings.
It might be useful to read up on ideal hash functions. Hash functions are designed to be functions where:
Small changes in the input cause radical, chaotic changes in the output
Collisions are reduced to a minimum
It is difficult or, ideally, impossible to reverse
There are no hashed values that are impossible to obtain with any inputs (this one matters significantly less for cryptographic purposes)
The theoretical "perfect" hash algorithm would be a "random oracle" -- that is, for every input, it outputs a perfectly random output, on the condition that for the same input, the output will be identical (this condition is fulfilled with magic, by the hand of Zeus and pixie fairies, or in a way that no human could ever possibly understand or figure out)
Unfortunately, this is pretty much impossible, and ultimately all hashes are judged as "strong" based on how many of these qualities they possess, and to what degree.
A hash like SHA1 or MD5 is going to be pretty strong, and more or less computationally impossible to find collisions for (within a reasonable time frame). Ultimately, you don't need to find a hash that is impossible to find collisions for. You only practically need one where the difficulty of it is large enough that it'd be too expensive to compute (ie, on the order of a billion or a trillion years to find a collision)
Because all hashes are imperfect, one could analyze the internal workings of a hash, spot mathematical patterns and heuristics, and try to find collisions along those patterns. This is similar to a hash function being % 7: hashing the number 13 gives 13 % 7 = 6, and 89 % 7 = 5. If you saw a hash of 3, you could use your mathematical understanding of the modulus function to easily find a collision (i.e., 10).[1] Fortunately for us, stronger hash functions have a much, much harder to understand mathematical basis. (Ideally, so hard that no human would ever understand it!)
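In code, that toy example looks like this:

def weak_hash(n):
    return n % 7

assert weak_hash(13) == 6 and weak_hash(89) == 5
# Given a hash of 3, adding any multiple of 7 to one preimage yields another:
assert weak_hash(3) == weak_hash(10) == 3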
Some figures:
Finding a collision for a single given SHA-0 hash takes about 13 full days of running computations on the top supercomputers in the world, using the patterns inherent in the math.
According to a helpful commenter, MD5 collisions can be generated "quickly" enough to be less than ideal for sensitive purposes.
No feasible or practical/usable collision finding method for SHA-1 has been found or proven so far, although, as pointed out in the comments, there are some weaknesses that have been discovered.
Here is a similar SO question, which has answers much wiser than mine.
[1] Note that, while this hash function is weak for collisions, it is strong in that it is impossible to go backwards and find the original input: if your hash is, say, 4, there are an infinite number of candidates (i.e., 4, 11, 18, 25...).
The answer is pretty clearly yes, since at the very least you could run through every possible string of the given length, compute the hashes of all of them, and then see which are the same. The more interesting question is how to do it quickly.
Further reading: http://en.wikipedia.org/wiki/Collision_attack
It depends on the hash function. With a simple hash function, it may be possible. For example, if the hash function simply sums the ASCII byte values of a string, then one could enumerate all strings of a given length that produce a given hash value. If the hash function is more complex and "cryptographically strong" (e.g., MD5 or SHA-1), then it is computationally infeasible in practice (though, as noted above, collisions have been found for MD5 and weaknesses are known in SHA-1).
Most hashes are of cryptographic or near-cryptographic strength, so the hash depends on the input in non-obvious ways. The way this is done professionally is with rainbow tables, which are precomputed tables of inputs and their hashes. So brute force checking is basically the only way.
What is the best algorithm to take a long sequence of integers (say 100,000 of them) and return a measurement of how random the sequence is?
The function should return a single result, say 0 if the sequence is not at all random, up to, say, 1 if perfectly random. It can give something in between if the sequence is somewhat random, e.g. 0.95 might be a reasonably random sequence, whereas 0.50 might have some non-random parts and some random parts.
If I were to pass the first 100,000 digits of Pi to the function, it should give a number very close to 1. If I passed the sequence 1, 2, ... 100,000 to it, it should return 0.
This way I can easily take 30 sequences of numbers, identify how random each one is, and return information about their relative randomness.
Is there such an animal?
…..
Update 24-Sep-2019: Google may have just ushered in an era of quantum supremacy says:
"Google’s quantum computer was reportedly able to solve a calculation — proving the randomness of numbers produced by a random number generator — in 3 minutes and 20 seconds that would take the world’s fastest traditional supercomputer, Summit, around 10,000 years. This effectively means that the calculation cannot be performed by a traditional computer, making Google the first to demonstrate quantum supremacy."
So obviously there is an algorithm to "prove" randomness. Does anyone know what it is? Could this algorithm also provide a measure of randomness?
Your question answers itself. "If I were to pass the first 100,000 digits of Pi to the function, it should give a number very close to 1" - but the digits of Pi are not random numbers, so if your algorithm does not recognise such a specific, deterministic sequence as being non-random, then it's not very good.
The problem here is that there are many types of non-randomness:
e.g. "121,351,991,7898651,12398469018461" or "33,27,99,3000,63,231" or even "14297141600464,14344872783104,819534228736,3490442496" are definitely not random.
I think what you need to do is identify the aspects of randomness that are important to you: distribution, distribution of digits, lack of common factors, the expected number of primes, Fibonacci and other "special" numbers, etc.
PS: the quick and dirty (and very effective) test of randomness: does the file end up roughly the same size after you gzip it?
It can be done this way:
CAcert Research Lab does a Random Number Generator Analysis.
Their results page evaluates each random sequence using 7 tests (Entropy, Birthday Spacing, Matrix Ranks, 6x8 Matrix Ranks, Minimum Distance, Random Spheres, and the Squeeze). Each test result is then color coded as one of "No Problems", "Potentially deterministic" and "Not Random".
So a function can be written that accepts a random sequence and does the 7 tests.
If any of the 7 tests are "Not Random" then the function returns a 0. If all of the 7 tests are "No Problems", then it returns a 1. Otherwise, it can return some number in-between based on how many tests come in as "Potentially Deterministic".
The only thing missing from this solution is the code for the 7 tests.
You could try to zip-compress the sequence. The better you succeed the less random the sequence is.
Thus, heuristic randomness = length of zipped sequence / length of original sequence.
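A sketch of that heuristic with Python's zlib (the helper name is mine):

import zlib

def compression_randomness(data: bytes) -> float:
    # Ratio of compressed size to original size, capped at 1.0.
    # Regular data compresses well and scores near 0; incompressible
    # (random-looking) data scores near 1.
    if not data:
        return 0.0
    return min(1.0, len(zlib.compress(data, 9)) / len(data))

# compression_randomness(b"123456789" * 10000) is tiny;
# compression_randomness(os.urandom(90000)) is essentially 1.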
As others have pointed out, you can't directly calculate how random a sequence is but there are several statistical tests that you could use to increase your confidence that a sequence is or isn't random.
The DIEHARD suite is the de facto standard for this kind of testing but it neither returns a single value nor is it simple.
ENT - A Pseudorandom Number Sequence Test Program, is a simpler alternative that combines 5 different tests. The website explains how each of these tests works.
If you really need just a single value, you could pick one of the 5 ENT tests and use that. The Chi-Squared test would probably be the best to use, but that might not meet the definition of simple.
Bear in mind that a single test is not as good as running several different tests on the same sequence. Depending on which test you choose, it should be good enough to flag up obviously suspicious sequences as being non-random, but might not fail for sequences that superficially appear random but actually exhibit some pattern.
You can treat your 100,000 outputs as observations of a random variable and calculate the associated entropy, which gives you a measure of uncertainty (see the Wikipedia article on Entropy for more information). Simply:
You just need to calculate the frequency of each number in the sequence. That gives you p(x_i) (e.g. if 10 appears 27 times, p(10) = 27/L where L is 100,000 in your case). The entropy is then H = -Σ p(x_i) log2 p(x_i).
This will not give you a number between 0 and 1, though. 0 is still minimal uncertainty, but the upper bound is not 1; you need to normalize the output to achieve that.
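A sketch of that calculation, normalized by the maximum possible entropy for the number of distinct values observed so the result lands between 0 and 1:

import math
from collections import Counter

def normalized_entropy(seq):
    # Shannon entropy of the observed value frequencies, divided by the
    # entropy of a uniform distribution over the same distinct values.
    counts = Counter(seq)
    total = len(seq)
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    max_h = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h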
What you seek doesn't exist, at least not how you're describing it now.
The basic issue is this:
If it's random then it will pass tests for randomness; but the converse doesn't hold -- there's no test that can verify randomness.
For example, one could have very strong correlations between elements far apart and one would generally have to test explicitly for this. Or one could have a flat distribution but generated in a very non-random way. Etc, etc.
In the end, you need to decide on what aspects of randomness are important to you, and test for these (as James Anderson describes in his answer). I'm sure if you think of any that aren't obvious how to test for, people here will help.
Btw, I usually approach this problem from the other side: I'm given some set of data that looks for all I can see to be completely random, but I need to determine whether there's a pattern somewhere. Very non-obvious, in general.
"How random is this sequence?" is a tough question because fundamentally you're interested in how the sequence was generated. As others have said it's entirely possible to generate sequences that appear random, but don't come from sources that we'd consider random (e.g. digits of pi).
Most randomness tests seek to answer a slightly different question, which is: "Is this sequence anomalous with respect to a given model?" If your model is rolling ten-sided dice, then it's pretty easy to quantify how likely a sequence is to have been generated from that model, and the digits of pi would not look anomalous. But if your model is "Can this sequence be easily generated from an algorithm?" it becomes much more difficult.
I want to emphasize here that the word "random" means not only identically distributed, but also independent of everything else (including independent of any other choice).
There are numerous "randomness tests" available, including tests that estimate p-values from running various statistical probes, as well as tests that estimate min-entropy, which is roughly a minimum "compressibility" level of a bit sequence and the most relevant entropy measure for "secure random number generators". There are also various "randomness extractors", such as the von Neumann and Peres extractors, that could give you an idea on how much "randomness" you can extract from a bit sequence. However, all these tests and methods can only be more reliable on the first part of this definition of randomness ("identically distributed") than on the second part ("independent").
In general, there is no algorithm that can tell, from a sequence of numbers alone, whether the process generated them in an independent and identically distributed way, without knowledge on what that process is. Thus, for example, although you can tell that a given sequence of bits has more zeros than ones, you can't tell whether those bits—
were truly generated independently of any other choice, or
form part of an extremely long periodic sequence that is only "locally random", or
were simply reused from another process, or
were produced in some other way,
...without more information on the process. As one important example, the process of a person choosing a password is rarely "random" in this sense since passwords tend to contain familiar words or names, among other reasons.
Also I should discuss the article added to your question in 2019. That article dealt with the task of sampling from the distribution of bit strings generated by pseudorandom quantum circuits, and doing so with a low rate of error (a task specifically designed to be exponentially easier for quantum computers than for classical computers), rather than the task of "verifying" whether a particular sequence of bits (taken out of its context) was generated "at random" in the sense given in this answer. There is an explanation on what exactly this "task" is in a July 2020 paper.
In Computer Vision when analysing textures, the problem of trying to gauge the randomness of a texture comes up, in order to segment it. This is exactly the same as your question, because you are trying to determine the randomness of a sequence of bytes/integers/floats. The best discussion I could find of image entropy is http://www.physicsforums.com/showthread.php?t=274518 .
Basically, it's a statistical measure of randomness for a sequence of values.
I would also try autocorrelation of the sequence with itself. In the autocorrelation result, if there are no peaks other than at lag zero, that means there is no periodicity to your input.
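A sketch of that check using numpy (a pronounced peak at any non-zero lag suggests periodicity; assumes the sequence is not constant):

import numpy as np

def autocorrelation(seq):
    x = np.asarray(seq, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0]  # normalize so the lag-0 value is 1

# If max(abs(autocorrelation(seq)[1:])) is close to 1, the sequence
# has strong periodic structure.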
I would use Claude Shannon’s Information Entropy algorithm. You can find the calculation on Youtube easily. I guess it really depends upon why you want this to be measured, and what type of reporting you want to do with the data points you collect.
#JohnFx "... mathematically impossible."
poster states: take a long sequence of integers ...
Thus, just as limits are used in The Calculus, we can take the value as being the value - the study of Chaotics shows us finite limits may 'turn on themselves' producing tensor fields that provide the illusion of absolute(s), and which can be run as long as there is time and energy. Due to the curvature of space-time, there is no perfection - hence the op's "... say 1 if perfectly random." is a misnomer.
{ noted: ample observations on that have been provided - spare me }
According to your position, given two byte[] arrays of a few k, each randomized independently, the OP could not obtain "a measurement of how random the sequence is". The article at Wiki is informative, and makes definite strides dis-entangling the matter, but
In comparison to classical physics, quantum physics predicts that the properties of a quantum mechanical system depend on the measurement context, i.e. whether or not other system measurements are carried out.
A team of physicists from Innsbruck, Austria, led by Christian Roos and Rainer Blatt, have for the first time proven in a comprehensive experiment that it is not possible to explain quantum phenomena in non-contextual terms.
Source: Science Daily
Let us consider non-random lizard movements. The source of the stimulus that initiates complex movements in the shed tails of leopard geckos, under your original, corrected hyper-thesis, can never be known. We, the experienced computer scientists, suffer the innocent challenge posed by newbies knowing too well that there - in the context of an un-tainted and pristine mind - are them gems and germinators of feed-forward thinking.
If the thought-field of the original lizard produces a tensor-field ( deal with it folks, this is front-line research in sub-linear physics ) then we could have "the best algorithm to take a long sequence" of civilizations spanning from the Toba Event to present through a Chaotic Inversion. Consider the question whether such a thought-field produced by the lizard, taken independently, is spooky or knowable.
"Direct observation of Hardy's paradox
by joint weak measurement with an
entangled photon pair," authored by
Kazuhiro Yokota, Takashi Yamamoto,
Masato Koashi and Nobuyuki Imoto from
the Graduate School of Engineering
Science at Osaka University and the
CREST Photonic Quantum Information
Project in Kawaguchi City
Source: Science Daily
( considering the spooky / knowable dichotomy )
I know from my own experiments that direct observation weakens the absoluteness of perceptible tensors, distinguishing between thought and perceptible tensors is impossible using only single focus techniques because the perceptible tensor is not the original thought. A fundamental consequence of quantaeus is that only weak states of perceptible tensors can be reliably distinguished from one another without causing a collapse into a unified perceptible tensor. Try it sometime - work on the manifestation of some desired eventuality, using pure thought. Because an idea has no time or space, it is therefore in-finite ( not-finite ) and therefore can attain "perfection" - i.e. absoluteness. Just for a hint, start with the weather as that is the easiest thing to influence ( at least as far as is currently known ) then move as soon as can be done to doing a join from the sleep-state to the waking-state with virtually no interruption of sequential chaining.
There is an almost unavoidable blip there when the body wakes up but it is just like when the doorbell rings, speaking of which brings an interesting area of statistical research to funding availability: How many thoughts can one maintain synchronously? I find that duality is the practical working limit, at triune it either breaks on the next thought or doesn't last very long.
Perhaps the work of Yokota et al could reveal the source of spurious net traffic...maybe it's ghosts.
As per Knuth, make sure you test the low-order bits for randomness, since many algorithms exhibit terrible randomness in the lowest bits.
Although this question is old, it does not seem "solved", so here is my 2 cents, showing that it is still an important problem that can be discussed in simple terms.
Consider password security.
The question was about "long" number sequences, "say 100.000", but does not state what is the criterium for "long". For passwords, 8 characters might be considered long. If those 8 chars were "random", it might be considered a good password, but if it can be easily guessed, a useless password.
Common password rules are to mix upper case, numbers and special characters. But the commonly used "Password1" is still a bad password. (okay, 9-char example, sorry) So how many of the methods of the other answers you apply, you should also check if the password occurs in several dictionaries, including sets of leaked passwords.
But even then, just imagine the rise of a new Hollywood star. This may lead to a new famous name that will be given to newborns, and may become popular as a password, that is not yet in the dictionaries.
If I am correctly informed, it is pretty much impossible to automatically verify that a password selected by a human is random and not derived from an easy-to-guess algorithm, and also a good password system should work with computer-generated random passwords.
The conclusion is that there is no method to verify whether an 8-char password is random, let alone a good and simple one. And if you cannot verify 8 characters, why would it be easier to verify 100,000 numbers?
The password example is just one example of how important this question of randomness is; think also about encryption. Randomness is the holy grail of security.
Measuring randomness? In order to do so, you should fully understand its meaning. The problem is, if you search the internet you will reach the conclusion that there is no single agreed-upon concept of randomness. For some people it's one thing, for others it's something else. You'll even find definitions given from a philosophical perspective. One of the most frequent misleading approaches is to test whether something "is random or not random". Randomness is not a "yes" or a "no"; it can be anything in between. Although it is possible to measure and quantify "randomness", its classification and categorization remain relative. So, to say that something is random or not random in an absolute way would be wrong, because it's relative and even subjective. Likewise, it is subjective and relative to say that something follows a pattern or doesn't, because, what is a pattern?
To measure randomness, you have to start from its mathematical premise, which is easy to understand and accept: if all possible outcomes/elements in your sample space have the EXACT same probability of happening, then randomness is achieved to its fullest extent. It's that simple. What is more difficult is linking this premise to a given sequence/set or distribution of outcomes in order to determine a degree of randomness. You could divide your sample into subsets and they could prove to be relatively random on their own, while the sample as a whole, analyzed together, proves to be much less random. So, to analyze the degree of randomness, you should consider the sample as a whole and not subdivided. Conducting several separate tests to prove randomness will necessarily lead to subjectivity and redundancy. There are not 7 tests or 5 tests; there is only one, and that test follows the premise above: it determines the degree of randomness based on the outcome frequency distribution type of a given sample.
The specific sequence of a sample is not relevant. A specific sequence would only be relevant if you decided to divide your sample into subsets, which, as explained, you shouldn't. If you consider the variables p (the number of possible outcomes/elements in the sample space) and n (the number of trials/events/experiments), there are p^n possible sequences in total. If the premise above holds, every one of these sequences has exactly the same probability of occurring. Because of this, any specific sequence is inconclusive for calculating the "randomness" of a sample. What matters is the probability of the sample's outcome distribution type occurring. To compute it, count all the sequences associated with that outcome distribution type. If s is that count, then s/(p^n) gives you a value between 0 and 1 which can be interpreted as a measurement of randomness for that sample, with 1 being 100% random and 0 being 0% random.
It should be said that you will never actually get a 1 or a 0. Even if a sample represents the MOST likely outcome distribution type, it can never be proven to be 100% random, and even if it represents the LEAST likely type, it can never be proven to be 0% random, because there are many possible outcome distribution types and no single one of them can represent being 100% or 0% random. To determine the value of s, use the same logic as in multinomial distribution probabilities. This method applies to any number of possible outcomes and any number of trials. Notice that the bigger your sample is, the more possible outcome frequency distribution types there are, and the less randomness each individual one can prove.
Calculating [s/(p^n)]*100 gives you the probability of the observed outcome frequency distribution type occurring if the source is truly random; the higher that probability, the more random your set is. To obtain an actual value of randomness, divide s/(p^n) by the largest value of s/(p^n) over all possible outcome frequency distribution types and multiply by 100.
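A sketch of the core calculation described above, for a sample drawn from p equally likely outcomes (for large samples you would work with logarithms, since the factorials get enormous):

from collections import Counter
from math import factorial, prod

def distribution_type_probability(seq, p):
    # Probability, under a truly random source with p equally likely
    # outcomes, of observing seq's outcome frequency distribution type:
    # s / p**n, where s = n! / (c1! * c2! * ...) is the multinomial
    # coefficient counting the sequences that share that distribution.
    n = len(seq)
    s = factorial(n) // prod(factorial(c) for c in Counter(seq).values())
    return s / p**n

Dividing this value by its maximum over all possible distribution types and multiplying by 100, as described above, rescales it into the 0-100 measure.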