I implemented a Fisher-Yates shuffle recently, which used List.permute to shuffle the list, and noted that as the size of the list increased, there was a significant performance decrease. I suspect this is due to the fact that while the algorithm assumes it is operating on an array, permute must be accessing the list elements by index, which is O(n).
To confirm this, I tried applying a permutation to a list to reverse its elements, comparing working directly on the list with converting the list to an array and back to a list:
let permute i max = max - i - 1
let test = [ 0 .. 10000 ]
let rev1 list =
    let perm i = permute i (List.length list)
    List.permute perm list

let rev2 list =
    let array = List.toArray list
    let perm i = permute i (Array.length array)
    Array.permute perm array |> Array.toList
I get the following results, which tend to confirm my assumption:
rev1 test;;
Real: 00:00:00.283, CPU: 00:00:00.265, GC gen0: 0, gen1: 0, gen2: 0
rev2 test;;
Real: 00:00:00.003, CPU: 00:00:00.000, GC gen0: 0, gen1: 0, gen2: 0
My questions are the following:
1) Should List.permute be avoided for performance reasons? And, relatedly, shouldn't the implementation of List.permute automatically do the transformation into an Array behind the scenes?
2) Besides using an Array, is there a more functional way / data structure suitable for this type of work, i.e. shuffling of elements? Or is this simply a problem for which an Array is the right data structure to use?
List.permute converts the list to an array, calls Array.permute, then converts it back to a list. Based on that, you can probably figure out what you need to do (hint: work with arrays!).
Should List.permute be avoided for performance reasons?
The only performance problem here is in your own code, specifically calling List.length.
Besides using an Array, is there a more functional way / data structure suitable for this type of work, i.e. shuffling of elements? Or is this simply a problem for which an Array is the right data structure to use?
You are assuming that arrays cannot be used functionally when, in fact, they can be: simply do not mutate their elements. Consider the permute function:
let permute f (xs: _ []) = Array.init xs.Length (fun i -> xs.[f i])
Although it acts upon an array and produces an array, it is not mutating anything, so it is using the array as a purely functional data structure.
Related
I have a situation where I have a process which needs to "burn-in". This means that I
Start with p values, p relatively small
For n>p, generate the nth value using the most recently generated p values (e.g. value p+1 is generated from values 1 to p, value p+2 from values 2 to p+1, etc.)
Repeat until n=N, where N large
Now, only the most recently generated p values will be useful to me, so there are two ways for me to implement this. I can either
Start with a vector of p initial values. At each iteration, mutate the vector, removing the first element, and replacing the last element with the most recently generated value or,
Preallocate a large array of length N, where first p elements are initial values. At iteration n, mutate nth value with most recently generated value
There are pros and cons to both approaches.
Pros of the first are that we only store the most relevant values. Cons of the first are that we are changing the length of the vector at each iteration.
Pros of the second are that we preallocate all the memory we need. Cons of the second is that we store much more than we need.
What is the best way to proceed? Does it depend on what aspect of performance I most need to care about? What will be the quickest?
Cheers in advance.
edit: approximately, p is usually on the order of low tens, and N can be several thousand
The first solution has another huge con: removing the first item of an array takes O(n) time, since the remaining elements have to be moved in memory. This causes the algorithm to run in quadratic time, which is not reasonable. Shifting the items as proposed by @ForceBru also results in this quadratic run time (since many items are moved just to add one value every time).
The second solution should be pretty fast compared to the first but, indeed, it can use a lot of memory, so it is likely sub-optimal (it takes time to write that many values to RAM).
A faster solution is to use a data structure called a deque. Such a data structure lets you remove the first item in constant time and append a new value at the end, also in constant time. That being said, it introduces some overhead to be able to do that. Julia provides such data structures (more specifically, queues and deques in the DataStructures.jl package).
Since the number of in-flight items appears to be bounded in your algorithm, you can use a rolling buffer. Fortunately, Julia also implements this: see CircularBuffer in the same package. This solution should be quite simple and fast (since the operations you want are O(1) on it).
It is probably simplest to use CircularArrays.jl for your use case:
julia> using CircularArrays
julia> c = CircularArray([1,2,3,4])
4-element CircularVector(::Vector{Int64}):
1
2
3
4
julia> for i in 5:10
           c[i] = i
           @show c
       end
c = [5, 2, 3, 4]
c = [5, 6, 3, 4]
c = [5, 6, 7, 4]
c = [5, 6, 7, 8]
c = [9, 6, 7, 8]
c = [9, 10, 7, 8]
In this way, as you can see, you can keep using an increasing index and the array will wrap around internally as needed (discarding old values that are no longer needed).
This way you always store the last p values in the array without having to copy anything or re-allocate memory at each step.
...only the most recently generated p values will be useful to me...
Start with a vector of p initial values. At each iteration, mutate the vector, removing the first element, and replacing the last element with the most recently generated value.
Cons of the first are that we are changing the length of the vector at each iteration.
There's no need to change the length of the vector. Simply shift its elements to the left (overwriting the first element) and write the new data to the_vector[end]:
the_vector = [1,2,3,4,5,6]

function shift_and_add!(vec::AbstractVector, value)
    vec[1:end-1] .= @view vec[2:end] # shift left
    vec[end] = value                 # replace the last value
    vec
end

@assert shift_and_add!(the_vector, 80) == [2,3,4,5,6,80]
# `the_vector` will be mutated
@assert the_vector == [2,3,4,5,6,80]
Non-Functional way:
arr = [1, 2, 3] becomes arr = [1, 5, 3]. Here we change the same array.
This is discouraged in functional programming. I know that since computers are becoming faster every day and there is more memory available, functional programming seems more feasible for better readability and cleaner code.
Functional way:
arr = [1, 2, 3] isn't changed; instead we create arr2 = [1, 5, 3]. I see a general trend that we use more memory and time just to change one value.
Here, we doubled our memory and the time complexity changed from O(1) to O(n).
This might be costly for bigger algorithms. Where is this compensated? Or since we can afford for costlier calculations (like when Quantum computing becomes mainstream), do we just trade speed off for readability?
Functional data structures don't necessarily take up a lot more space or require more processing time. The important aspect here is that purely functional data structures are immutable, but that doesn't mean you always make a complete copy of something. In fact, the immutability is precisely the key to working efficiently.
I'll provide as an example a simple list. Suppose we have the following list: (1, 2, 3).
The head of the list is element 1. The tail of the list is (2, 3). Suppose this list is entirely immutable.
Now, we want to add an element at the start of that list. Our new list must look like this: (0, 1, 2, 3).
You can't change the existing list; it is immutable. So, we have to make a new one, right? However, note how the tail of our new list is (1, 2, 3). That's identical to the old list. So, you can just re-use that. The new list is simply a new cell holding the element 0, with a pointer to the start of the old list (1, 2, 3) as its tail.
If our lists were mutable, this would not be safe. If you changed something in the old list (for example, replacing element 2 with a different one) the change would reflect in the new list as well. That's exactly where the danger is in mutability: concurrent access to data structures needs to be synchronized to avoid unpredictable results, and changes can have unintended side effects. But, because that can't happen with immutable data structures, it's safe to re-use part of another structure in a new one. Sometimes you want changes in one thing to reflect in another; for example, when you remove an entry from the key set of a Map in Java, you want the mapping itself to be removed too. But in other situations mutability leads to trouble (the infamous Calendar class in Java).
So how can this work, if you can't change the data structure itself? How do you make a new list? Remember that if we're working purely functionally, we move away from the classical data structures with changeable pointers, and instead evaluate functions.
In functional languages, making lists is done with the cons function. cons makes a "cell" of two elements. If you want to make a list with only one element, the second one is nil. So a list containing only the element 3 is:
(cons 3 nil)
If the above is a function and you ask what its head is, you get 3. Ask for the tail, you get nil. Now, the tail itself can be a function, like cons.
Our first list then is expressed as such:
(cons 1 (cons 2 (cons 3 nil)))
Ask the head of the above function and you get 1. Ask for the tail and you get (cons 2 (cons 3 nil)).
If we want to append 0 in the front, you just make a new function that evaluates to cons with 0 as head and the above as tail.
(cons 0 (cons 1 (cons 2 (cons 3 nil))))
Since the functions we make are immutable, our lists become immutable. Things like adding elements is a matter of making a new function that calls the old one in the right place. Traversing a list in the imperative and object-oriented way is going through pointers to get from one element to another. Traversing a list in the functional way is evaluating functions.
I like to think of data structures as this: a data structure is basically storing the result of running some algorithm in memory. It "caches" the result of computation, so we don't have to do the computation every time. Purely functional data structures model the computation itself via functions.
This in fact means that it can be quite memory efficient because a lot of data copying can be avoided. And with an increasing focus on parallelization in processing, immutable data structures can be very useful.
EDIT
Given the additional questions in the comments, I'll add a bit to the above to the best of my abilities.
What about my example? Is it something like cons(1 fn) and that function can be cons(2 fn2) where fn2 is cons(3 nil) and in some other case cons(5 fn2)?
The cons function is best compared to a singly-linked list. As you might imagine, if you're given a list composed of cons cells, what you're getting is the head, and thus random access to some index isn't possible. In your array you can just call arr[1] and get the second item (since it's 0-indexed) in the array, in constant time. If you state something like val list = (cons 1 (cons 2 (cons 3 nil))) you can't just ask for the second item without traversing it, because list is now actually a function you evaluate. So access requires linear time, and access to the last element will take longer than access to the head element. Also, given that it's equivalent to a singly-linked list, traversal can only go in one direction. So the behavior and performance are more like those of a singly-linked list than of, say, an ArrayList or array.
Purely functional data structures don't necessarily provide better performance for some operations such as indexed access. A "classic" data structure may have O(1) for some operation where a functional one may have O(log n) for the same one. That's a trade-off; functional data structures aren't a silver bullet, just like object-orientation wasn't. You use them where they make sense. If you're always going to traverse a whole list or part of it and want to be capable of safe parallel access, a structure composed of cons cells works perfectly fine. In functional programming, you'd often traverse a structure using recursive calls where in imperative programming you'd use a for loop.
There are of course many other functional data structures, some of which come much closer to modeling an array that allows random access and updates. But they're typically a lot more complex than the simple example above. There's of course advantages: parallel computation can be trivially easy thanks to immutability; memoization allows us to cache the results of function calls based on inputs since a purely functional approach always yields the same result for the same input.
What are we actually storing underneath? If we need to traverse a list, we need a mechanism to point to next elements right? or If I think a bit, I feel like it is irrelevant question to traverse a list since whenever a list is required it should probably be reconstructed everytime?
We store data structures containing functions. What is a cons? A simple structure consisting of two elements: a head and tail. It's just pointers underneath. In an object-oriented language like Java, you could model it as a class Cons that contains two final fields head and tail assigned on construction (immutable) and has corresponding methods to fetch these. This in a LISP variant
(cons 1 (cons 2 nil))
would be equivalent to
new Cons(1, new Cons(2, null))
in Java.
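To make that concrete, here is a minimal sketch of such an immutable Cons class in Java (an illustration only, not a standard library class; null stands in for nil):
// Minimal immutable cons cell: head and tail are fixed at construction.
public final class Cons<T> {
    private final T head;
    private final Cons<T> tail; // null plays the role of nil

    public Cons(T head, Cons<T> tail) {
        this.head = head;
        this.tail = tail;
    }

    public T head() { return head; }
    public Cons<T> tail() { return tail; }

    // Prepending re-uses the existing list as the tail of the new cell:
    // nothing is copied, which is safe precisely because nothing can be mutated.
    public Cons<T> prepend(T value) {
        return new Cons<>(value, this);
    }
}
Prepending 0 to an existing list is then just new Cons<>(0, oldList): the old list is shared, not copied.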
The big difference in functional languages is that functions are first-class types. They can be passed around and assigned to variables just like object references. You can compose functions. I could just as easily do this in a functional language
val list = (cons 1 (max 2 3))
and if I ask list.head I get 1, if I ask list.tail I get (max 2 3) and evaluating that just gives me 3. You compose functions. Think of it as modeling behavior instead of data. Which brings us to
Could you elaborate "Purely functional data structures model the computation itself via functions."?
Calling list.tail on our above list returns something that can be evaluated and then returns a value. In other words, it returns a function. If I call list.tail in that example it returns (max 2 3), clearly a function. Evaluating it yields 3 as that's the highest number of the arguments. In this example
(cons 1 (cons 2 nil))
calling tail evaluates to a new cons (the (cons 2 nil) one) which in turn can be used.
Suppose we want a sum of all the elements in our list. In Java, before the introduction of lambdas, if you had an array int[] array = new int[] {1, 2, 3} you'd do something like
int sum = 0;
for (int i = 0; i < array.length; ++i) {
    sum += array[i];
}
In a functional language it would be something like (simplified pseudo-code)
(define sum (arg)
  (eq arg nil
    (0)
    (+ arg.head (sum arg.tail))
  )
)
This uses prefix notation like we've used with our cons so far. So a + b is written as (+ a b). define lets us define a function, with as arguments the name (sum), a list of arguments for the function ((arg)), and then the actual function body (the rest).
The function body consists of an eq function which we'll define as comparing its first two arguments (arg and nil) and if they're equal it evaluates to its next argument ((0) in this case), otherwise to the argument after that (the sum). So think of it as (eq arg1 arg2 true false) with true and false whatever you want (a value, a function...).
The recursion bit then comes in the sum (+ arg.head (sum arg.tail)). We're stating that we take the addition of the head of the argument with a recursive call to the sum function itself on the tail. Suppose we do this:
val list = (cons 1 (cons 2 (cons 3 nil)))
(sum list)
Mentally step through what that last line would do to see how it evaluates to the sum of all the elements in list.
Note, now, how sum is a function. In the Java example we had some data structure and then iterated over it, performing access on it, to create our sum. In the functional example the evaluation is the computation. A useful aspect of this is that sum as a function could be passed around and evaluated only when it's actually needed. That is lazy evaluation.
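Using the illustrative Cons class sketched above, the same recursion might look like this in Java (just a sketch):
// Recursive sum over the immutable cons list; null is our stand-in for nil.
static int sum(Cons<Integer> list) {
    if (list == null) {                        // the "eq arg nil" case
        return 0;
    }
    return list.head() + sum(list.tail());     // (+ arg.head (sum arg.tail))
}

// Usage, mirroring (sum (cons 1 (cons 2 (cons 3 nil)))):
// Cons<Integer> list = new Cons<>(1, new Cons<>(2, new Cons<>(3, null)));
// sum(list) evaluates to 6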
Another example of how data structures and algorithms are actually the same thing in a different form. Take a set. A set can contain only one instance of an element, for some definition of equality of elements. For something like integers it's simple; if they are the same value (like 1 == 1) they're equal. For objects, however, we typically have some equality check (like equals() in Java). So how can you know whether a set already contains an element? You go over each element in the set and check if it is equal to the one you're looking for.
A hash set, however, computes some hash function for each element and places elements with the same hash in a corresponding bucket. For a good hash function there will rarely be more than one element in a bucket. If you now provide some element and want to check if it's in the set, the actions are:
Get the hash of the provided element (typically takes constant time).
Find the hash bucket in the set for that hash (again should take constant time).
Check if there's an element in that bucket which is equal to the given element.
The requirement is that two equal elements must have the same hash.
So now you can check if something is in the set in constant time. The reason being that our data structure itself has stored some computation information: the hashes. If you store each element in a bucket corresponding to its hash, we have put some computation result in the data structure itself. This saves time later if we want to check whether the set contains an element. In that way, data structures are actually computations frozen in memory. Instead of doing the entire computation every time, we've done some work up-front and re-use those results.
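As a rough sketch of those steps in Java (a hypothetical bucket layout, not how java.util.HashSet is implemented internally):
// Hypothetical hash-bucket lookup mirroring the three steps above.
// buckets is a non-empty java.util.List of buckets; each bucket is a list of elements.
static <T> boolean bucketContains(java.util.List<java.util.List<T>> buckets, T element) {
    int hash = element.hashCode();                   // 1. hash of the provided element
    int index = Math.floorMod(hash, buckets.size()); // 2. find the bucket for that hash
    for (T candidate : buckets.get(index)) {         // 3. equality check within the bucket
        if (candidate.equals(element)) {
            return true;
        }
    }
    return false;
}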
When you think of data structures and algorithms as being analogous in this way, it becomes clearer how functions can model the same thing.
Make sure to check out the classic book "Structure and Interpretation of Computer Programs" (often abbreviated as SICP). It'll give you a lot more insight. You can read it for free here: https://mitpress.mit.edu/sicp/full-text/book/book.html
This is a really broad question with a lot of room for opinionated answers, but G_H provides a really nice breakdown of some of the differences.
Could you elaborate "Purely functional data structures model the computation itself via functions."?
This is one of my favourite topics, so I'm happy to share an example in JavaScript, because it will allow you to run the code here in the browser and see the answer for yourself.
Below you will see a linked list implemented using functions. I use a couple of Numbers for example data and a String so that I can log something to the console for you to see, but other than that, it's just functions: no fancy objects, no arrays, no other custom stuff.
const cons = (x,y) => f => f(x,y)
const head = f => f((x,y) => x)
const tail = f => f((x,y) => y)
const nil = () => {}
const isEmpty = x => x === nil
const comp = f => g => x => f(g(x))
const reduce = f => y => xs =>
isEmpty(xs) ? y : reduce (f) (f (y,head(xs))) (tail(xs))
const reverse = xs =>
reduce ((acc,x) => cons(x,acc)) (nil) (xs)
const map = f =>
comp (reverse) (reduce ((acc, x) => (cons(f(x), acc))) (nil))
// this function is required so we can visualise the data
// it effectively converts a linked-list of functions to readable strings
const list2str = xs =>
isEmpty(xs) ? 'nil' : `(${head(xs)} . ${list2str(tail(xs))})`
// example input data
const xs = cons(1, cons(2, cons(3, cons(4, nil))))
// example derived data
const ys = map (x => x * x) (xs)
console.log(list2str(xs))
// (1 . (2 . (3 . (4 . nil))))
console.log(list2str(ys))
// (1 . (4 . (9 . (16 . nil))))
Of course this isn't of practical use in real-world JavaScript, but that's beside the point. It's just showing you how functions alone could be used to represent complex data structures.
Here's another example of implementing rational numbers using nothing but functions and numbers. Again, we're only using strings so we can convert the functional structure to a visual representation we can understand in the console; this exact scenario is examined thoroughly in the SICP book that G_H mentions.
We even implement our higher-order data type rat using cons. This shows how functional data structures can easily be made up of (composed of) other functional data structures.
const cons = (x,y) => f => f(x,y)
const head = f => f((x,y) => x)
const tail = f => f((x,y) => y)
const mod = y => x =>
y > x ? x : mod (y) (x - y)
const gcd = (x,y) =>
y === 0 ? x : gcd(y, mod (y) (x))
const rat = (n,d) =>
(g => cons(n/g, d/g)) (gcd(n,d))
const numer = head
const denom = tail
const ratAdd = (x,y) =>
rat(numer(x) * denom(y) + numer(y) * denom(x),
denom(x) * denom(y))
const rat2str = r => `${numer(r)}/${denom(r)}`
// example complex data
let x = rat(1,2)
let y = rat(1,4)
console.log(rat2str(x)) // 1/2
console.log(rat2str(y)) // 1/4
console.log(rat2str(ratAdd(x,y))) // 3/4
In Matlab I wish to preallocate a 1x30 array of structures named P with the following structure fields:
imageSize: [128 128]
orientationsPerScale: [8 8 8 8]
numberBlocks: 4
fc_prefilt: 4
boundaryExtension: 32
G: [192x192x32 double]
G might not necessarily be 192x192x32, it could be 128x128x16 for example (though it will have 3 dimensions of type double).
I am doing the preallocation the following way:
P(30) = struct('imageSize', 0, 'orientationsPerScale', [0 0 0 0], ...
'numberBlocks', 0, 'fc_prefilt', 0, 'boundaryExtension', 0, 'G', []);
Is this the correct way of preallocating such a structure, or will there be performance issues relating to G being set to empty []? If there is a better way of allocating this structure please provide an example.
Also, the above approach seems to work (performance issues aside); however, the order of the field name / value pairs seems to be important, since rearranging them leads to an error upon assignment after preallocation. Why is this so, given that the items/values are referenced by name (not position)?
If G is set to empty ([]), the interpreter has no way of knowing what size of data will be assigned to it later, so it will probably pack the array items tightly in memory and have to redo it all when the data doesn't fit.
It's probably more efficient to define upper bounds for the dimensions of G beforehand and preallocate it at that size. The zeros function could help.
Which implementation from scala.collection.mutable package should I take if I intend to do lots of by-index-deletions, like remove(i: Int), in a single-threaded environment? The most obvious choice, ListBuffer, says that it may take linear time depending on buffer size. Is there some collection with log(n) or even constant time for this operation?
Removal operators, including buf remove i, are not part of Seq; remove is actually part of the Buffer trait under scala.collection.mutable. (See Buffers.)
See the first table on Performance Characteristics. I am guessing buf remove i has the same characteristic as insert, which is linear for both ArrayBuffer and ListBuffer.
As documented in Array Buffers, they use arrays internally, and List Buffers use linked lists (that's still O(n) for remove).
As an alternative, the immutable Vector may give you effectively constant time.
Vectors are represented as trees with a high branching factor. Every tree node contains up to 32 elements of the vector or contains up to 32 other tree nodes. [...] So for all vectors of reasonable size, an element selection involves up to 5 primitive array selections. This is what we meant when we wrote that element access is "effectively constant time".
scala> import scala.collection.immutable._
import scala.collection.immutable._
scala> def remove[A](xs: Vector[A], i: Int) = (xs take i) ++ (xs drop (i + 1))
remove: [A](xs: scala.collection.immutable.Vector[A],i: Int)scala.collection.immutable.Vector[A]
scala> val foo = Vector(1, 2, 3, 4, 5)
foo: scala.collection.immutable.Vector[Int] = Vector(1, 2, 3, 4, 5)
scala> remove(foo, 2)
res0: scala.collection.immutable.Vector[Int] = Vector(1, 2, 4, 5)
Note, however, that an effectively constant time with lots of overhead may not beat quick linear access until the data size is significantly large.
Depending on your exact use case, you may be able to use LinkedHashMap from scala.collection.mutable.
Although you cannot remove by index, you can remove by a unique key in constant time, and it maintains a deterministic ordering when you iterate.
scala> val foo = new scala.collection.mutable.LinkedHashMap[String,String]
foo: scala.collection.mutable.LinkedHashMap[String,String] = Map()
scala> foo += "A" -> "A"
res0: foo.type = Map((A,A))
scala> foo += "B" -> "B"
res1: foo.type = Map((A,A), (B,B))
scala> foo += "C" -> "C"
res2: foo.type = Map((A,A), (B,B), (C,C))
scala> foo -= "B"
res3: foo.type = Map((A,A), (C,C))
Java's ArrayList effectively has constant time complexity if the last element is the one to be removed. Look at the following snippet copied from its source code,
int numMoved = size - index - 1;
if (numMoved > 0)
    System.arraycopy(elementData, index+1, elementData, index,
                     numMoved);
elementData[--size] = null; // clear to let GC do its work
As you can see, if numMoved is equal to 0, remove will not shift and copy the array at all. This in some scenarios can be quite useful. For example, if you do not care about the ordering that much, to remove an element, you can always swap it with the last element, and then delete the last element from the ArrayList, which effectively makes the remove operation all the way constant time. I was hoping ArrayBuffer would do the same, unfortunately that is not the case.
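For example, a minimal sketch of that swap-with-last trick in Java (order is not preserved; the helper name is just illustrative):
// Constant-time unordered removal: overwrite the element to delete with the
// last element, then drop the last slot (which copies nothing, per the snippet above).
static <T> void swapRemove(java.util.ArrayList<T> list, int index) {
    int last = list.size() - 1;
    list.set(index, list.get(last)); // move the last element into the hole
    list.remove(last);               // numMoved == 0, so no arraycopy
}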
I have got numbers in a specific range (usually from 0 to about 1000). An algorithm selects some numbers from this range (about 3 to 10 numbers). This selection is done quite often, and I need to check if a permutation of the chosen numbers has already been selected.
e.g. one step selects [1, 10, 3, 18] and another one [10, 18, 3, 1]; then the second selection can be discarded because it is a permutation.
I need to do this check very fast. Right now I put all arrays in a hashmap, and use a custom hash function: it just sums up all the elements, so 1+10+3+18=32, and also 10+18+3+1=32. For equals I use a bitset to quickly check if the elements are in both sets (I do not need sorting when using the bitset, but it only works when the range of numbers is known and not too big).
This works ok, but can generate lots of collisions, so the equals() method is called quite often. I was wondering if there is a faster way to check for permutations?
Are there any good hash functions for permutations?
UPDATE
I have done a little benchmark: generate all combinations of numbers in the range 0 to 6, and array length 1 to 9. There are 3003 possible permutations, and a good hash should generate close to this many different hashes (I use 32-bit numbers for the hash):
41 different hashes for just adding (so there are lots of collisions)
8 different hashes for XOR'ing values together
286 different hashes for multiplying
3003 different hashes for (R + 2e) and multiplying as abc has suggested (using 1779033703 for R)
So abc's hash can be calculated very fast and is a lot better than all the rest. Thanks!
PS: I do not want to sort the values when I do not have to, because this would get too slow.
One potential candidate might be this.
Fix an odd integer R.
For each element e you want to hash, compute the factor (R + 2*e).
Then compute the product of all these factors.
Finally divide the product by 2 to get the hash.
The factor 2 in (R + 2e) guarantees that all factors are odd, hence ensuring that the product never becomes 0. The division by 2 at the end is because the product will always be odd, so the division just removes a constant bit.
E.g. I choose R = 1779033703. This is an arbitrary choice; doing some experiments should show if a given R is good or bad. Assume your values are [1, 10, 3, 18].
The product (computed using 32-bit ints) is
(R + 2) * (R + 20) * (R + 6) * (R + 36) = 3376724311
Hence the hash would be
3376724311/2 = 1688362155.
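A minimal Java sketch of that scheme (using the R from the example; the deliberate int overflow gives the mod 2^32 wrap-around):
// Order-independent hash: multiply the odd factors (R + 2*e), then drop the low bit.
// Multiplication is commutative, so any permutation of the same values hashes identically.
static int permutationHash(int[] values) {
    final int R = 1779033703;     // the arbitrary odd constant chosen above
    int product = 1;
    for (int e : values) {
        product *= R + 2 * e;     // int overflow wraps modulo 2^32, which is intended
    }
    return product >>> 1;         // the product is always odd; discard its constant low bit
}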
Summing the elements is already one of the simplest things you could do. But I don't think it's a particularly good hash function w.r.t. pseudo randomness.
If you sort your arrays before storing them or computing hashes, every good hash function will do.
If it's about speed: Have you measured where the bottleneck is? If your hash function is giving you a lot of collisions and you have to spend most of the time comparing the arrays bit-by-bit the hash function is obviously not good at what it's supposed to do. Sorting + Better Hash might be the solution.
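For instance, a simple sort-then-hash sketch in Java (Arrays.hashCode is order-sensitive, so sorting a copy first makes all permutations of the same values collide on purpose):
import java.util.Arrays;

// Sort a copy so every permutation of the same values maps to the same key,
// then let a standard order-sensitive hash do the bit mixing.
static int sortedHash(int[] values) {
    int[] copy = values.clone();
    Arrays.sort(copy);
    return Arrays.hashCode(copy);
}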
If I understand your question correctly you want to test equality between sets where the items are not ordered. This is precisely what a Bloom filter will do for you. At the expense of a small number of false positives (in which case you'll need to make a call to a brute-force set comparison) you'll be able to compare such sets by checking whether their Bloom filter hash is equal.
The algebraic reason why this holds is that the OR operation is commutative. This holds for other semirings, too.
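As a rough illustration of that idea (a signature in the spirit of a Bloom filter, not a full implementation; the mixing constant is arbitrary):
// Order-independent 64-bit signature: OR one pseudo-random bit per element.
// Equal signatures mean "probably a permutation"; confirm with a full comparison.
static long bloomSignature(int[] values) {
    long bits = 0L;
    for (int e : values) {
        int h = e * 0x9E3779B9;   // cheap integer mixing; the constant is illustrative
        bits |= 1L << (h & 63);   // set one of 64 bit positions (OR is commutative)
    }
    return bits;
}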
Depending on whether you have a lot of collisions (i.e. the same hash but not a permutation), you might presort the arrays while hashing them. In that case you can do a more aggressive kind of hashing where you don't just add up the numbers but apply some bit magic as well to get quite different hashes.
This is only beneficial if you get loads of unwanted collisions because the hash you are using now is too poor. If you hardly get any collisions, the method you are using seems fine.
I would suggest this:
1. Check if the lengths of the permutations are the same (if not, they are not equal).
2. Sort only one array. Instead of sorting the other array, iterate through the elements of the 1st array and search for each of them in the 2nd (sorted) array, comparing only while the elements in the 2nd array are smaller (do not iterate through the whole array).
Note: if you can have repeated numbers in your permutations (e.g. [1,2,2,10]), then you will need to remove elements from the 2nd array as they are matched against members of the 1st one.
pseudo-code:
if length(arr1) <> length(arr2) return false;
sort(arr2);
for i=1 to length(arr1) {
    elem = arr1[i];
    j = 1;
    while (j <= length(arr2) and arr2[j] < elem) j = j + 1;
    if j > length(arr2) or elem <> arr2[j] return false;
}
return true;
The idea is that instead of sorting the other array as well, we can just try to match each of its elements against the sorted one.
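A direct Java translation of that idea might look like this (a sketch that, per the note above, assumes the values contain no duplicates):
import java.util.Arrays;

// True if arr1 is a permutation of arr2, assuming no repeated values.
// Only arr2 is sorted; each element of arr1 is located by a scan that
// stops as soon as the sorted values exceed it.
static boolean isPermutation(int[] arr1, int[] arr2) {
    if (arr1.length != arr2.length) return false;
    int[] sorted = arr2.clone();
    Arrays.sort(sorted);
    for (int elem : arr1) {
        int j = 0;
        while (j < sorted.length && sorted[j] < elem) j++;
        if (j == sorted.length || sorted[j] != elem) return false;
    }
    return true;
}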
You can probably reduce the collisions a lot by using the product as well as the sum of the terms.
1*10*3*18=540 and 10*18*3*1=540
so the sum-product hash would be [32,540]
You still need to do something about collisions when they do happen, though.
I like using String's default hash code (Java, C#; not sure about other languages); it generates pretty unique hash codes.
So if you first sort the array, you can then generate a unique string using some delimiter.
You can do the following (Java):
int[] arr = selectRandomNumbers();
Arrays.sort(arr);
int hash = (arr[0] + "," + arr[1] + "," + arr[2] + "," + arr[3]).hashCode();
If performance is an issue, you can change the inefficient string concatenation to use a StringBuilder or String.format:
String.format("%d,%d,%d,%d", arr[0], arr[1], arr[2], arr[3]);
A String hash code of course doesn't guarantee that two distinct strings have different hashes, but considering this suggested formatting, collisions should be extremely rare.