reflexive hash? - algorithm

Is there a class of hash algorithms, whether theoretical or practical, such that an algorithm in the class might be considered 'reflexive' according to the definition given below:
hash1 = algo1 ( "input text 1" )
hash1 = algo1 ( "input text 1" + hash1 )
The + operator might be concatenation or any other specified operation that combines the output (hash1) back into the input ("input text 1") so that the algorithm (algo1) produces exactly the same result, i.e. a collision between the input and the input+output.
The + operator must combine the entirety of both inputs and the algo may not discard part of the input.
The algorithm must produce 128 bits of entropy in the output.
It may, but need not, be cryptographically hard to reverse the output back to one or both possible inputs.
I am not a mathematician, but a good answer might include a proof of why such a class of algorithms cannot exist. This is not an abstract question, however. I am genuinely interested in using such an algorithm in my system, if one does exist.

Sure, here's a trivial one:
def algo1(input):
    total = 0
    for ch in input:
        total += ord(ch)
    # total % 256 and -total % 256 sum to 0 mod 256, so appending these
    # two characters leaves the hash of the extended string unchanged
    return chr(total % 256) + chr(-total % 256)
Concatenate the result and the "hash" doesn't change. It's pretty easy to come up with something similar when you can reverse the hash.
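For example, the claim is easy to check directly:

h = algo1("input text 1")
assert algo1("input text 1" + h) == h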

Yes, you can get this effect with a CRC.
What you need to do is:
1. Implement an algorithm that will find a sequence of N input bits leading from one given state (of the N-bit CRC accumulator) to another.
2. Compute the CRC of your input in the normal way. Note the final state (call it A).
3. Using the function implemented in step 1, find a sequence of bits that leads from A back to A. This sequence is your hash code; you can now append it to the input.
[Initial state] >- input string -> [A] >- hash -> [A] ...
Here is one way to find the hash. (Note: there is an error in the numbers in the CRC32 example, but the algorithm works.)
And here's an implementation in Java. Note: I've used a 32-bit CRC (smaller than the 128 bits you specify) because that's what the standard library implements, but with third-party library code you can easily extend it to larger hashes.
// Requires java.util.zip.CRC32, java.nio.ByteBuffer / ByteOrder, java.util.Arrays,
// java.nio.charset.StandardCharsets, and org.apache.commons.lang3.ArrayUtils.
public static byte[] hash(byte[] input) {
    CRC32 crc = new CRC32();
    crc.update(input);
    int reg = ~(int) crc.getValue();   // recover the internal CRC register
    return delta(reg, reg);            // bits that lead from state A back to A
}

public static void main(String[] args) {
    byte[] prefix = "Hello, World!".getBytes(StandardCharsets.UTF_8);
    System.err.printf("%s => %s%n", Arrays.toString(prefix), Arrays.toString(hash(prefix)));
    byte[] suffix = hash(prefix);
    byte[] combined = ArrayUtils.addAll(prefix, suffix);
    System.err.printf("%s => %s%n", Arrays.toString(combined), Arrays.toString(hash(combined)));
}

// Compute 4 bytes of input that drive the CRC register from state 'from'
// to state 'to', working backwards from the target state one byte at a time.
private static byte[] delta(int from, int to) {
    ByteBuffer buf = ByteBuffer.allocate(8);
    buf.order(ByteOrder.LITTLE_ENDIAN);
    buf.putInt(from);
    buf.putInt(to);
    for (int i = 8; i-- > 4;) {
        int e = CRCINVINDEX[buf.get(i) & 0xff];
        buf.putInt(i - 3, buf.getInt(i - 3) ^ CRC32TAB[e]);
        buf.put(i - 4, (byte) (e ^ buf.get(i - 4)));
    }
    return Arrays.copyOfRange(buf.array(), 0, 4);
}

private static final int[] CRC32TAB = new int[0x100];
private static final int[] CRCINVINDEX = new int[0x100];

// Build the one-byte CRC transition table and its inverse index.
static {
    CRC32 crc = new CRC32();
    for (int b = 0; b < 0x100; ++b) {
        crc.update(~b);
        CRC32TAB[b] = 0xFF000000 ^ (int) crc.getValue();
        CRCINVINDEX[CRC32TAB[b] >>> 24] = b;
        crc.reset();
    }
}

Building on ephemiat's answer, I think you can do something like this:
Pick your favorite symmetric-key block cipher (e.g. AES). For concreteness, let's say that it operates on 128-bit blocks. For a given key K, denote the encryption and decryption functions by Enc(K, block) and Dec(K, block), respectively, so that block = Dec(K, Enc(K, block)) = Enc(K, Dec(K, block)).
Divide your input into an array of 128-bit blocks (padding as necessary). You can either choose a fixed key K or make it part of the input to the hash. In the following, we'll assume that it's fixed.
def hash(input):
    state = arbitrary 128-bit initialization vector
    for i = 1 to len(input) do
        state = state ^ Enc(K, input[i])
    h = Dec(K, state)
    return concatenate(h, h)
This function returns a 256-bit hash: the 128-bit block h = Dec(K, state), twice. It is not too hard to verify that it satisfies the "reflexivity" condition: adjoining the hash appends the block h to the input twice, and those two blocks contribute Enc(K, h) ^ Enc(K, h) = 0 to the XORed state, so the final state, and therefore the hash, is unchanged. One caveat: the inputs must be padded to a whole number of 128-bit blocks before the hash is adjoined. In other words, instead of hash(input) = hash(input + hash(input)) as originally specified, we have hash(input) = hash(input' + hash(input)) where input' is just the padded input. I hope this isn't too onerous.
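A minimal executable sketch of this construction in Python, using the pycryptodome package for raw AES (the key, IV, and zero-padding scheme below are arbitrary choices for illustration, not recommendations):

# pip install pycryptodome
from Crypto.Cipher import AES

K = bytes(16)                       # fixed demo key (all zeros)
IV = bytes(16)                      # arbitrary fixed initialization vector

def enc(block):                     # Enc(K, block)
    return AES.new(K, AES.MODE_ECB).encrypt(block)

def dec(block):                     # Dec(K, block)
    return AES.new(K, AES.MODE_ECB).decrypt(block)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def pad(data):                      # zero-pad to whole 16-byte blocks
    return data + bytes(-len(data) % 16)

def reflexive_hash(data):
    data = pad(data)
    state = IV
    for i in range(0, len(data), 16):
        state = xor(state, enc(data[i:i+16]))
    h = dec(state)
    return h + h                    # 256 bits out, 128 bits of entropy

msg = b"input text 1"
h = reflexive_hash(msg)
assert reflexive_hash(pad(msg) + h) == h   # reflexivity, with the padding caveat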

Well, I can tell you that you won't get a proof of nonexistence. Here's an example:
operator+(a,b): compute a 64-bit hash of a and a 64-bit hash of b, and concatenate the bitstrings, returning a 128-bit value.
algo1: for a 128-bit value, ignore the last 64 bits and compute some hash of the first 64.
Informally, any algo1 whose first step is the same 64-bit hash that + applies to its first operand will do. Maybe not as interesting a class as you were looking for, but it fits the bill. And it's not without real-world instances either: plenty of password hashing algorithms truncate their input.

I'm pretty sure that such a "reflexive hash" function (if it did exist in more than the trivial sense) would not be a useful hash function in the normal sense.
For an example of a "trivial" reflexive hash function:
int hash(Object obj) { return 0; }


What hash function "hacks" do you use?

A hash function can be dependent on the data. For example (from this article), if your data are all strings and almost all of them are of different lengths, then the string length itself could be a very good hash function (not very realistic, I know). Or, for example, real numbers from 0 to 1 could use the simple hash function:
value * sizeOfHashTable
I am interested in whether you use such hash functions, tailor-made around your inputs. Any more examples?
As you correctly noted, the hash function depends on the hashed data.
The common idea in designing a good hash function is to satisfy three conditions:
1. The function must be easy to compute. It may even be better to use a mediocre hash that is quick to compute than a very good one that is slow: the time saved on hashing can outweigh the time lost to imbalanced buckets or longer probe paths.
2. The function must distribute well (pseudorandomly) over the test dataset. A good property to aim for is the "avalanche effect": changing a single bit in the input data changes about half the bits in the output value. (A quick way to measure this is sketched after this list.)
3. For external input data, the hash function must be "universal", i.e. resistant to attempts to generate collisions.
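Here is a minimal Python sketch for eyeballing the avalanche property of any 32-bit hash (zlib.crc32 is just a stand-in; substitute your own function):

import zlib

def avalanche(h, data):
    # average number of output bits flipped per single-bit input flip
    base = h(data)
    flips = []
    for byte in range(len(data)):
        for bit in range(8):
            mutated = bytearray(data)
            mutated[byte] ^= 1 << bit
            flips.append(bin(base ^ h(bytes(mutated))).count("1"))
    return sum(flips) / len(flips)

print(avalanche(zlib.crc32, b"hello, hash functions"))  # ~16 is ideal for 32 bits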
My favorite hash function follows. Before first use, the table S_block needs to be initialized with some random values; it is a good idea to do this at each program run.
const unsigned int S_block[256] = { ... };  /* fill with random values at startup */

#define NLF(h, c) (S_block[(unsigned char)(c + h)] ^ c)  /* nonlinear mixing step */

unsigned int hash(const char *key) {
    unsigned int h = 0x1F351F35;
    char c;
    while ((c = *key++))
        h = ((h << 7) | (h >> (32 - 7))) + NLF(h, c);  /* rotate left 7, add */
    h ^= h >> 16;
    return h ^ (h >> 8);  /* fold the high bits down into the low ones */
}
As a practical example, see the variation of this function used in my program emcSSH: the file htable.c contains a variant suitable for the double hashing algorithm.

Encode a byte array with an alphabet, output should look randomly distributed

I'm encoding binary data b1, b2, ... bn using an alphabet. But since the binary representations of the bs are more or less sequential, a simple mapping of bits to chars results in very similar strings. Example:
encode(b1) => "QXpB4IBsu0"
encode(b2) => "QXpB36Bsu0"
...
I'm looking for ways to make the output more "random", meaning more difficult to guess the input b when looking at the output string.
Some requirements:
For different bs, the output strings must be different. Encoding the same b multiple times does not necessarily have to produce the same output; as long as there are no collisions between the output strings of different input bs, everything is fine.
If it is of any importance: each b is around ~50-60 bits. The alphabet contains 64 characters.
The encoding function should not produce larger output strings than the ones you get by just using a simple mapping from the bits of bs to chars of the alphabet (given the values above, this means ~10 characters for each b). So just using a hash function like SHA is not an option.
Possible solutions for this problem don't need to be "cryptographically secure". If someone invests enough time and effort to reconstruct the binary data, then so be it. But the goal is to make it as difficult as possible. It may help that a decode function is not needed anyway.
What I am doing at the moment:
1. take the next 4 bits from the binary data, let's say xxxx
2. prepend 2 random bits r to get rrxxxx
3. look up the corresponding char in the alphabet: val char = alphabet[rrxxxx] and add it to the result (this works because the alphabet's size is 64)
4. continue with step 1
This approach adds some noise to the output string; however, the size of the string is increased by 50% due to the random bits. I could add more noise by adding more random bits (rrrxxx or even rrrrxx), but the output would get larger and larger. One of the requirements I mentioned above is not to increase the size of the output string. Currently I'm only using this approach because I have no better idea.
As an alternative procedure, I thought about shuffling the bits of an input b before applying the alphabet. But since it must be guaranteed that different bs result in different strings, the shuffle function should use some kind of determinism (maybe a secret number as an argument) instead of being completely random. I wasn't able to come up with such a function.
I'm wondering if there is a better way, any hint is appreciated.
Basically, you need a reversible pseudo-random mapping from each possible 50-bit value to another 50-bit value. You can achieve this with a reversible Linear Congruential Generator (the kind used for some pseudo-random number generators).
When encoding, apply the LCG to your number in the forward direction, then encode with base64. If you need to decode, decode from base64, then apply the LCG in the opposite direction to get your original number back.
This answer contains some code for a reversible LCG. You'll need one with a period of 2^50. The constants used to define your LCG would be your secret numbers.
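In Python, a minimal sketch of such a reversible mapping looks like this (the constants are arbitrary placeholders, not vetted LCG parameters; any odd multiplier is invertible mod 2^50):

M = 1 << 50                 # 50-bit keyspace
A = 0x5DEECE66D             # odd, hence invertible mod M (arbitrary choice)
C = 0x2545F4914F6C          # additive constant (arbitrary choice)
A_INV = pow(A, -1, M)       # modular inverse; Python 3.8+

def forward(x):             # apply before encoding with the alphabet
    return (A * x + C) % M

def backward(y):            # apply after decoding, to recover x
    return ((y - C) * A_INV) % M

x = 123456789
assert backward(forward(x)) == x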
You want to use a multiplicative inverse. That will take the sequential key and transform it into a non-sequential number. There is a one-to-one relationship between the keys and their results. So no two numbers will create the same non-sequential key, and the process is reversible.
I have a small example, written in C#, that illustrates the process.
private void DoIt()
{
    const long m = 101;
    const long x = 387420489; // must be coprime to m

    var multInv = MultiplicativeInverse(x, m);
    var nums = new HashSet<long>();
    for (long i = 0; i < 100; ++i)
    {
        var encoded = i * x % m;
        var decoded = encoded * multInv % m;
        Console.WriteLine("{0} => {1} => {2}", i, encoded, decoded);
        if (!nums.Add(encoded))
        {
            Console.WriteLine("Duplicate");
        }
    }
}

private long MultiplicativeInverse(long x, long modulus)
{
    var inverse = ExtendedEuclideanDivision(x, modulus).Item1 % modulus;
    // C#'s % can yield a negative remainder; normalize into [0, modulus)
    return inverse < 0 ? inverse + modulus : inverse;
}

private static Tuple<long, long> ExtendedEuclideanDivision(long a, long b)
{
    if (a < 0)
    {
        var result = ExtendedEuclideanDivision(-a, b);
        return Tuple.Create(-result.Item1, result.Item2);
    }
    if (b < 0)
    {
        var result = ExtendedEuclideanDivision(a, -b);
        return Tuple.Create(result.Item1, -result.Item2);
    }
    if (b == 0)
    {
        return Tuple.Create(1L, 0L);
    }
    var q = a / b;
    var r = a % b;
    var rslt = ExtendedEuclideanDivision(b, r);
    var s = rslt.Item1;
    var t = rslt.Item2;
    return Tuple.Create(t, s - q * t);
}
Code cribbed from the above-mentioned article, and supporting materials.
The idea, then, is to take your sequential number, multiply it by x mod m, and then base-64 encode the result. To reverse the process, base-64 decode the value you're given, multiply by the multiplicative inverse mod m, and you have the original number.
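Putting the pieces together, a small Python sketch of the whole pipeline (the modulus, multiplier, and alphabet here are illustrative):

M = 1 << 50                        # 50-bit keyspace
X = 387420489                      # odd, hence coprime to M
X_INV = pow(X, -1, M)              # Python 3.8+
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"

def encode(n):
    n = n * X % M                  # scramble the sequential key
    chars = []
    for _ in range(9):             # ceil(50 / 6) = 9 alphabet chars
        chars.append(ALPHABET[n & 63])
        n >>= 6
    return "".join(chars)

def decode(s):
    n = 0
    for c in reversed(s):          # undo the 6-bits-per-char packing
        n = (n << 6) | ALPHABET.index(c)
    return n * X_INV % M

k = 987654321
assert decode(encode(k)) == k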

How to convert from any large arbitrary base to another

What I’d like to do is to convert a string from one "alphabet" to another, much like converting numbers between bases, but more abstract and with arbitrary digits.
For instance, converting "255" from the alphabet "0123456789" to the alphabet "0123456789ABCDEF" would result in "FF". One way to do this is to convert the input string into an integer, and then back again. Like so: (pseudocode)
int decode(string input, string alphabet) {
    int value = 0;
    for (i = 0; i < input.length; i++) {
        int index = alphabet.indexOf(input[i]);
        value += index * pow(alphabet.length, input.length - i - 1);
    }
    return value;
}

string encode(int value, string alphabet) {
    string encoded = "";
    while (value > 0) {
        int index = value % alphabet.length;
        encoded = alphabet[index] + encoded;
        value = floor(value / alphabet.length);
    }
    return encoded;
}
Such that decode("255", "0123456789") returns the integer 255, and encode(255, "0123456789ABCDEF") returns "FF".
This works for small alphabets, but I’d like to be able to use base 26 (all the uppercase letters) or base 52 (uppercase and lowercase) or base 62 (uppercase, lowercase and digits), and values that are potentially over a hundred digits. The algorithm above would, theoretically, work for such alphabets, but, in practice, I’m running into integer overflow because the numbers get so big so fast when you start doing 62^100.
What I’m wondering is if there is an algorithm to do a conversion like this without having to keep up with such gigantic integers? Perhaps a way to begin the output of the result before the entire input string has been processed?
My intuition tells me that it might be possible, but my math skills are lacking. Any help would be appreciated.
There are a few similar questions here on StackOverflow, but none seem to be exactly what I'm looking for.
A general way to store numbers in an arbitrary base would be to store it as an array of integers. Minimally, a number would be denoted by a base and array of int (or short or long depending on the range of bases you want) representing different digits in that base.
Next, you need to implement multiplication in that arbitrary base.
After that you can implement conversion (clue: if x is the old base, calculate x, x^2, x^3, ..., in the new base; then multiply the digits from the old base by these numbers accordingly and add them up).
Java-like Pseudocode:
ArbitraryBaseNumber source = new ArbitraryBaseNumber(11, "103A");
ArbitraryBaseNumber target = new ArbitraryBaseNumber(3, "0");
for (int digit : source.getDigitListAsIntegers()) { // [1, 0, 3, 10]
    target.incrementBy(digit);
    if (not final digit) {
        target.multiplyBy(source.base);
    }
}
The challenge that remains, of course, is to implement ArbitraryBaseNumber, with its incrementBy(int) and multiplyBy(int) methods. Essentially, to do that, you do in code exactly what a schoolchild does when doing addition and long multiplication on paper. Google and you'll find examples.
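To make the schoolbook arithmetic concrete, here is a rough Python sketch of the whole conversion on digit arrays (the function name and layout are mine; no big-integer type is needed, so nothing can overflow):

def convert(digits, from_base, to_base):
    # digits: most-significant-first list of ints in from_base
    result = [0]                               # to_base digits, least significant first
    for d in digits:                           # Horner: result = result*from_base + d
        carry = d
        for i in range(len(result)):
            carry += result[i] * from_base
            result[i] = carry % to_base
            carry //= to_base
        while carry:
            result.append(carry % to_base)
            carry //= to_base
    return result[::-1]                        # most significant first

assert convert([2, 5, 5], 10, 16) == [15, 15]  # "255" -> "FF"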

Lazy Shuffle Algorithms

I have a large list of elements that I want to iterate in random order. However, I cannot modify the list and I don't want to create a copy of it either, because 1) it is large and 2) it can be expected that the iteration is cancelled early.
List<T> data = ...;
Iterator<T> shuffled = shuffle(data);
while (shuffled.hasNext()) {
    T t = shuffled.next();
    if (System.console().readLine("Do you want %s?", t).startsWith("y")) {
        return t;
    }
}
System.out.println("That's all");
return null;
I am looking for an algorithm where the code above would run in O(n) (and preferably require only O(log n) space), so caching the elements that were produced earlier is not an option. I don't care if the algorithm is biased (as long as it's not obvious).
(I used pseudo-Java in my question, but you can use other languages if you wish.)
Here is the best I got so far.
Iterator<T> shuffle(final List<T> data) {
    int p = data.size();
    while ((data.size() % p) == 0) p = randomPrime();
    final int prime = p;  // a prime that does not divide data.size()
    return new Iterator<T>() {
        int n = 0, i = 0;
        public boolean hasNext() { return i < data.size(); }
        public T next() {
            i++;
            n = (n + prime) % data.size();
            return data.get(n);
        }
    };
}
This iterates all elements in O(n) with constant space, but it is obviously biased, as it can produce at most data.size() of the possible permutations (one ordering per choice of prime).
The easiest shuffling approaches I know of work with indices. If the List is not an ArrayList, you may end up with a very inefficient algorithm if you try to use one of the approaches below (a LinkedList does have get by index, but it's O(n), so you'd end up with O(n^2) time).
If O(n) space is fine (which I'm assuming it's not), I'd recommend the Fisher-Yates / Knuth shuffle: it's O(n) time and easy to implement. You can optimise it so you only need to perform a single operation before being able to return the first element, but you'll need to keep track of the rest of the modified list as you go.
My solution:
OK, so this is not very random at all, but I can't see a better way if you want less than O(n) space.
It takes O(1) space and O(n) time.
There may be a way to push the space usage up a little and get more random results, but I haven't figured that out yet.
It has to do with relative primes. The idea is that, given two coprime integers a (the generator) and b, when you loop through a % b, 2a % b, 3a % b, 4a % b, ..., you will see every integer 0, 1, 2, ..., b-2, b-1 before seeing any integer twice. Unfortunately I don't have a link to a proof (the Wikipedia link may mention or imply it; I didn't check in too much detail).
I start off by increasing the length until we get a prime, since every smaller number is then coprime to it, which is a whole lot less limiting (and I just skip any generated number greater than or equal to the original length); then I generate a random number and use it as the generator.
I'm iterating through and printing out all the values, but it should be easy enough to modify this to generate the next value given the current one.
Note that I'm skipping 1 and len-1 with my nextInt, since these will produce 1, 2, 3, ... and ..., 3, 2, 1 respectively, but you can include them, though probably not if the length is below a certain threshold.
You may also want to generate a random number to multiply the generator by (mod the length) to determine where to start from.
Java code:
static Random gen = new Random();

static void printShuffle(int len)
{
    // find the first prime >= len
    int newLen = len - 1;
    boolean prime;
    do
    {
        newLen++;
        // trial-division primality check
        prime = true;
        for (int i = 2; prime && i < newLen; i++)
            prime &= (newLen % i != 0);
    }
    while (!prime);

    // random generator in [2, len-2], skipping 1 and len-1
    long val = gen.nextInt(len - 3) + 2;
    long oldVal = val;
    do
    {
        if (val < len)              // skip values in [len, newLen)
            System.out.println(val);
        val = (val + oldVal) % newLen;
    }
    while (oldVal != val);
}
This is an old thread, but in case anyone comes across this in future, a paper by Andrew Kensler describes a way to do this in constant time and constant space. Essentially, you create a reversible hash function, and then use it (and not an array) to index the list. Kensler describes a method for generating the necessary function, and discusses "cycle walking" as a way to deal with a domain that is not identical to the domain of the hash function. Afnan Enayet's summary of the paper is here: https://afnan.io/posts/2019-04-05-explaining-the-hashed-permutation/.
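The flavor of the technique, in a rough Python sketch: a small Feistel network gives a reversible keyed bijection on [0, 2^k), and cycle walking shrinks its domain to [0, n). (This is a generic illustration, not Kensler's exact construction; Python's built-in hash stands in for a real round function and is only stable within one process.)

def feistel(i, width, key, rounds=4):
    # balanced Feistel network: a bijection on [0, 2**width) for even width
    half = width // 2
    mask = (1 << half) - 1
    left, right = i >> half, i & mask
    for r in range(rounds):
        left, right = right, left ^ (hash((right, r, key)) & mask)
    return (left << half) | right

def permute(i, n, key):
    # map index i to a pseudo-random index in [0, n) without storing an array
    width = n.bit_length() + (n.bit_length() & 1)   # round width up to even
    j = feistel(i, width, key)
    while j >= n:                                   # cycle-walk back into range
        j = feistel(j, width, key)
    return j

perm = [permute(i, 10, key=42) for i in range(10)]
assert sorted(perm) == list(range(10))              # it is a permutation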
You may try using a buffer to do this. Iterate through a limited chunk of the data and put it in the buffer. Extract random values from the buffer and send them to the output (or wherever you need them). Iterate through the next chunk and keep overwriting this buffer. Repeat this step.
You'll end up with n + n operations, which is still O(n). Unfortunately the result will not be actually random; it will be close to random if you choose your buffer size properly.
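A Python sketch of this buffered approach (the buffer size is the knob that trades memory for randomness):

import random

def buffered_shuffle(iterable, buf_size=1024):
    buf = []
    for item in iterable:
        if len(buf) < buf_size:
            buf.append(item)            # fill the buffer first
        else:
            k = random.randrange(buf_size)
            yield buf[k]                # emit a random buffered element...
            buf[k] = item               # ...and overwrite it with the new one
    random.shuffle(buf)                 # drain the remainder
    yield from buf

for x in buffered_shuffle(range(20), buf_size=4):
    print(x)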
On a different note, check these two: Python - run through a loop in non linear fashion, random iteration in Python
Perhaps there's a more elegant algorithm to do this better. I'm not sure though. Looking forward to other replies in this thread.
This is not a perfect answer to your question, but perhaps it's useful.
The idea is to use a reversible random number generator and the usual array-based shuffling algorithm done lazily: to get the i'th shuffled item, swap a[i] with a randomly chosen a[j] where j is in [i..n-1], then return a[i]. This can be done in the iterator.
After you are done iterating, reset the array to original order by "unswapping" using the reverse direction of the RNG.
The unshuffling reset will never take longer than the original iteration, so it doesn't change asymptotic cost. Iteration is still linear in the number of iterations.
How to build a reversible RNG? Just use an encryption algorithm. Encrypt the previously generated pseudo-random value to go forward, and decrypt to go backward. If you have a symmetric encryption algorithm, then you can add a "salt" value at each step forward to prevent a cycle of two and subtract it for each step backward. I mention this because RC4 is simple and fast and symmetric. I've used it before for tasks like this. Encrypting 4-byte values then computing mod to get them in the desired range will be quick indeed.
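Here's a Python sketch of the swap-then-unswap idea, with a reversible LCG standing in for RC4 (the constants and names are mine; the generator's finally block restores the list when iteration stops):

# a 64-bit LCG that can be stepped in both directions
M, A, C = 1 << 64, 6364136223846793005, 1442695040888963407
A_INV = pow(A, -1, M)

def fwd(s):  return (A * s + C) % M
def back(s): return ((s - C) * A_INV) % M

def lazy_shuffle(data, seed=12345):
    s, i, n = seed, 0, len(data)
    try:
        while i < n - 1:
            s = fwd(s)
            j = i + s % (n - i)           # lazy Fisher-Yates: pick j in [i, n-1]
            data[i], data[j] = data[j], data[i]
            i += 1
            yield data[i - 1]
        if i < n:
            yield data[i]
    finally:
        for k in range(i - 1, -1, -1):    # undo the swaps in reverse order,
            j = k + s % (n - k)           # regenerating each j by stepping
            data[k], data[j] = data[j], data[k]
            s = back(s)                   # the generator backwards

data = list(range(10))
g = lazy_shuffle(data)
for x in g:
    if x == 3:
        break
g.close()                                 # triggers the unswap in finally
print(data)                               # original order is restored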
You can press this into the Java iterator pattern by extending Iterator to allow resets. See below. Usage will look like:
ShuffledList<Integer> lst = new ShuffledList<>();
// ... build the list with the usual operations
ResettableIterator<Integer> i = lst.iterator();
while (i.hasNext()) {
    int val = i.next();
    // ... use the randomly selected value
    if (anyConditionAtAll) break;
}
i.reset(); // Unshuffle the array
I know this isn't perfect, but it will be fast and give a good shuffle. Note that if you don't reset, the next iterator will still be a new random shuffle, but the original order will be lost forever. If the loop body can generate an exception, you'd want the reset in a finally block.
class ShuffledList<T> extends ArrayList<T> implements Iterable<T> {

    @Override
    public Iterator<T> iterator() {
        return null; // TODO: return a ShufflingIterator over this list
    }

    public interface ResettableIterator<T> extends Iterator<T> {
        public void reset();
    }

    class ShufflingIterator<T> implements ResettableIterator<T> {

        int mark = 0;

        @Override
        public boolean hasNext() {
            return true; // TODO
        }

        @Override
        public T next() {
            return null; // TODO
        }

        @Override
        public void remove() {
            throw new UnsupportedOperationException("Not supported.");
        }

        @Override
        public void reset() {
            throw new UnsupportedOperationException("Not supported yet.");
        }
    }
}

algorithm for generating a random numeric string, 10,000 chars in length?

Can be in any language or even pseudocode. I was asked this in an interview question, and was curious what you guys can come up with.
I think this is a trick question - the obvious answer of generating digits using a standard library routine is almost certainly flawed, if you want to generate every possible 10000 digit number with equal probability...
If an algorithmic random number generator maintains n bits of state, then clearly it can generate at most 2^n different output sequences, because there are only 2^n different initial configurations.
2^33219 < 10^10000 < 2^33220, so if your algorithm uses fewer than 33220 bits of internal state, it cannot possibly generate some of the 10^10000 possible 10000-digit (decimal) numbers.
Typical standard library random number generators won't use anything like this much internal state. Even the Mersenne Twister (the most frequently mentioned generator with a large state that I'm aware of) only keeps 624 32-bit words (= 19968 bits) of state.
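The bound itself is a one-line computation:

import math
print(10000 * math.log2(10))   # 33219.28..., so at least 33220 bits of state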
Just one of many ways. You can pass in any string as the alphabet of characters you want to use:
public class RandomUtils
{
    private static readonly Random random = new Random((int)DateTime.Now.Ticks);

    public static string GenerateRandomDigitString(int length)
    {
        const string digits = "1234567890";
        return GenerateRandomString(length, digits);
    }

    public static string GenerateRandomAlphaString(int length)
    {
        const string alpha = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
        return GenerateRandomString(length, alpha);
    }

    public static string GenerateRandomString(int length, string alphabet)
    {
        int maxlen = alphabet.Length;
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < length; i++)
        {
            sb.Append(alphabet[random.Next(0, maxlen)]);
        }
        return sb.ToString();
    }
}
Without additional requirements, this will work:
StringBuilder randomStr = new StringBuilder(10000);
Random rnd = new Random();
for (int i = 0; i < 10000; i++)
{
    // any UTF-16 code unit at all, including unprintable ones
    char randomChar = (char)rnd.Next(0, char.MaxValue + 1);
    randomStr.Append(randomChar);
}
This will result in unprintable characters and other unpleasantness. Using an ASCII encoding you can get letters, numbers and punctuation by sticking to the range 32 - 126, or by creating a random number between 0 and 94 and adding 32. Not sure which aspect they were looking for in the question.
BTW, no, I did not know the visible range off the top of my head; I looked it up on Wikipedia.
Generate a number in the range 0..9. Convert it to a digit. Stuff that into a string. Repeat 10000 times.
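In Python, for instance, that recipe is a one-liner (illustrative):

import random
print("".join(str(random.randrange(10)) for _ in range(10000)))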
I always like saying that computer random numbers are only ever pseudo-random. Anyway, your favourite language will invariably have a random library. Next, what is a numeric string: 0-9 valued for each character? Let's start with that assumption. We can then generate bytes in the ASCII range for '0'-'9' using an offset of 48 plus (int)(random * 10) (since random generators typically return floats), place these in a char buffer 10,000 long, and convert it to a string.
Return a string containing 10,000 1s -- that's just as random as any other digit string of the same length.
I think the real question was to determine what the interviewer actually wanted. For example, random in what sense? Uncompressable? Random over multiple runs of the same algorithm? Etc.
You can start with a list of seed digits:
seeds = [4,9,3,1,2,5,5,4,4,8,4,3] # This should be relatively large
Then, use a counter to keep track of which digit was last used. This would be system-wide and shouldn't reset with the system:
def next_digit():
    counter = 0
    while True:
        yield counter
        counter += 1

pos_it = next_digit()
rand_it = next_digit()
Next, use an algorithm that uses modulus to determine the "next number":
def random_digit():
    position = next(pos_it) % len(seeds)
    digit = seeds[position] * next(rand_it)
    return digit % 10
Last, generate 10,000 of those digits.
output = ""
for i in range(10000):
output = "%s%s" % (output, random_digit())
I believe that an ideal answer would use more prime numbers, but this should be pretty sufficient.
