I'm looking for a function f that maps a 32-bit integer into another. The function should be bijective, and though the mapping should look approximately random, it is not required to be cryptographically secure. There is an important extra requirement: there should also be an easily computable function g that is the inverse of f. It is okay for g to be the same as f, though not required.
There are many options given the non-cryptographic requirement, including:
XOR (mentioned by Ilya)
Bit rotation (f: Rotate N bits to the right, g: Rotate N bits to the left)
Addition / subtraction (without overflow checking, so that int.MaxValue + 1 maps to int.MinValue and int.MinValue - 1 maps to int.MaxValue)
A block cipher (good call @Thomas, I was focused on the easy ones :-)
In the meantime I've found an ideal solution to my problem: use a linear congruential generator. They are easily reversible and the permutation looks sufficiently random for my needs.
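For a concrete starting point, here is a minimal Python sketch of that idea (the constants are just illustrative; any odd multiplier gives a bijection modulo 2^32):

M = 1 << 32                  # work modulo 2^32
A = 1664525                  # odd multiplier (Numerical Recipes constants, purely illustrative)
C = 1013904223
A_INV = pow(A, -1, M)        # modular inverse of A mod 2^32 (needs Python 3.8+)

def f(x):
    # forward permutation of 0 .. 2^32 - 1
    return (A * x + C) % M

def g(y):
    # inverse permutation: g(f(x)) == x for every 32-bit x
    return (A_INV * (y - C)) % M

assert g(f(0xCAFEBABE)) == 0xCAFEBABE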
Which exponent(s) d will require this many?
Would greatly appreciate any advice as to how to go about solving this problem.
Assuming unsigned integers and a simple power-by-squaring algorithm like:
DWORD powuu(DWORD a, DWORD b)
    {
    int i, bits = 32;
    DWORD d = 1;
    for (i = 0; i < bits; i++)              // scan exponent bits from MSB to LSB
        {
        d *= d;                             // square
        if (DWORD(b & 0x80000000)) d *= a;  // multiply when the current bit is set
        b <<= 1;                            // move to the next bit
        }
    return d;
    }
You just need to replace each a*b with modmul(a,b,n) or (a*b)%n, so the answer is:
if the exponent has k bits and l of them are set, you need k+l multiplications
the worst case is 2k multiplications, for the exponent (2^k)-1
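As a rough illustration (a Python sketch, not the C code above), here is the same loop with (a*b)%n substituted in, counting multiplications:

def powmod_count(a, b, n, bits=32):
    d, mults = 1, 0
    for i in reversed(range(bits)):
        d = (d * d) % n                  # one squaring per exponent bit (k of these)
        mults += 1
        if (b >> i) & 1:
            d = (d * a) % n              # one extra multiply per set bit (l of these)
            mults += 1
    return d, mults

d, mults = powmod_count(0xDEADBEEF, (1 << 32) - 1, 1000000007)
print(mults)                             # 64 = 2k for the all-ones exponent (2^32)-1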
For more info see related QAs:
Power by squaring for negative exponents
modular arithmetics and NTT (finite field DFT) optimizations
For a naive implementation, it's clearly the exponent with the largest Hamming weight (number of set bits). In this case (2^k - 1) would require the most multiplication steps: k multiplies on top of the k squarings.
For k-ary window methods, the number of multiplications can be made independent of the exponent. E.g., for a fixed window size w = 3 we could compute the group coefficients {m^0, m^1, m^2, m^3, ..., m^7} (all mod n in this case, and probably in Montgomery representation for efficient reduction). The result is ceil(k/w) multiplications. This is often preferred in cryptographic implementations, as the exponent is not revealed by simple timing attacks: any k-bit exponent has the same timing. (The reality is a bit more complex if it is assumed the attacker has 'fine-grained' access to things like cache performance, etc.)
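A sketch of such a fixed-window method (w = 3, plain modular arithmetic rather than Montgomery form, and not written to be constant-time; it only illustrates the ceil(k/w) structure):

def powmod_fixed_window(m, e, n, w=3, bits=32):
    table = [1]
    for _ in range((1 << w) - 1):
        table.append((table[-1] * m) % n)        # precompute m^0 .. m^(2^w - 1) mod n
    d = 1
    for i in reversed(range(0, bits, w)):        # one w-bit window per iteration
        for _ in range(w):
            d = (d * d) % n                      # w squarings per window
        d = (d * table[(e >> i) & ((1 << w) - 1)]) % n   # exactly one multiply per window
    return d

print(powmod_fixed_window(12345, 0xDEADBEEF, 1000000007) == pow(12345, 0xDEADBEEF, 1000000007))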
Sliding window techniques are typically more efficient, and only slightly more difficult to implement than fixed-window methods. However, they also leak side-channel data, as timing will depend on the exponent. Furthermore, finding the 'best' multiplication sequence to use is known to be a hard problem.
Can anyone explain how the hashing trick is conducted in VW? Specifically, the description below, from the gist:
the default is hashing / projecting feature names to the machine architecture unsigned word using a variant of the murmurhash v3 (32-bit only) algorithm, which then is ANDed with (2^k)-1 (i.e. it is projected down to the first k lower-order bits with the rest 0'd out).
It mentions the result of the hash being 'ANDed' with (2^k)-1. What does this mean? I understand that if a hash is taken mod some number D (hash('my string') % D), it results in a new number that can only take on D values. Is this the same as being AND'ed? If so, how exactly does it work?
(2^k)-1 in binary is "k ones", e.g. (2^6)-1 = 111111 (in binary). When you apply a bitwise AND to the original hash number and (2^k)-1, you effectively keep only the k lower-order bits of the hash. It is the same operation as taking the hash mod 2^k.
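A tiny Python illustration (the hash value and k below are made up; as far as I recall, VW's default table size is 2^18, set with the -b option):

h = 0x7ABC1234            # pretend this is the 32-bit murmurhash of a feature name
k = 18                    # number of low-order bits to keep
mask = (1 << k) - 1       # (2^k)-1, i.e. k ones in binary

assert h & mask == h % (1 << k)   # masking and mod 2^k select the same k low bits
print(h & mask)                   # bucket index in the range 0 .. 2^k - 1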
Suppose I have a long and irregular digital signal made up of smaller but irregular signals occurring at various times (and overlapping each other). We will call these shorter signals the "pieces" that make up the larger signal. By "irregular" I mean that it is not a specific frequency or pattern.
Given the long signal I need to find the optimal arrangement of pieces that produce (as closely as possible) the larger signal. I know what the pieces look like but I don't know how many of them exist in the full signal (or how many times any one piece exists in the full signal). What software algorithm would you use to do this optimization? What do I search for on the web to get help on solving this problem?
Here's a stab at it.
This is actually the easier of the deconvolution problems. It is easier in that you may be able to have a unique answer. The harder problem is that you also don't know what the pieces look like. That case is called blind deconvolution. It is a harder problem and is usually iterative and statistical (ML or MAP), and the solution may not be right.
Luckily, your case is easier, but still not so easy because you have multiple pieces :p
I think that it may be commonly called mixture deconvolution?
So let f[t] for t=1,...N be your long signal. Let h1[t]...hn[t] for t=0,1,2,...M be your short signals. Obviously here, N>>M.
So your hypothesis is that:
(1) f[t] = h1[t+a1[1]]+h1[t+a1[2]] + ...
+h2[t+a2[1]]+h2[t+a2[2]] + ...
+....
+hn[t+an[1]]+hn[t+an[2]] + ...
Observe that each row of that equation is actually hj * uj, where uj is a sum of shifted Kronecker deltas. The * here is convolution.
So now what?
Let Hj be the (maybe transposed depending on how you look at it) Toeplitz matrix generated by hj, then the equation above becomes:
(2) F = H1 U1 + H2 U2 + ... Hn Un
subject to the constraint that uj[k] must be either 0 or 1.
where F is the vector [f[0], ..., f[N]] and Uj is the vector [uj[0], ..., uj[N]].
So you can rewrite this as:
(3) F = H * U
where H = [H1 ... Hn] (horizontal concatenation) and U = [U1; ... ;Un] (vertical concatenation).
H is an Nx(nN) matrix. U is an nN vector.
Ok, so the solution space is finite. It is 2^(nN) in size. So you can try all possible combinations to see which one gives you the lowest ||F - H*U||, but that will take too long.
What you can do is solve equation (3) using the pseudo-inverse, or multi-linear regression (which uses least squares, which comes out to the pseudo-inverse), or something like this
Is it possible to solve a non-square under/over constrained matrix using Accelerate/LAPACK?
Then move that solution around within the null space of H to get a solution subject to the constraint that uj[k] must be either 0 or 1.
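As a rough numpy sketch of that linear-algebra step (the pieces, lengths, offsets and threshold below are made-up placeholders, and the crude rounding at the end stands in for the null-space adjustment just described):

import numpy as np

N = 200                                                     # length of the long signal f
pieces = [np.hanning(15), np.r_[np.ones(8), -np.ones(8)]]   # stand-ins for the known h_j

# Build H = [H1 ... Hn]: column t of Hj is h_j shifted to start at sample t.
blocks = []
for h in pieces:
    Hj = np.zeros((N, N))
    for t in range(N):
        seg = h[: N - t]                             # clip pieces that run off the end
        Hj[t:t + len(seg), t] = seg
    blocks.append(Hj)
H = np.hstack(blocks)                                # N x (n*N)

# Synthesize a long signal from a few known placements plus a little noise.
u_true = np.zeros(H.shape[1])
u_true[[10, 60, 130, N + 90]] = 1                    # piece 0 at t=10,60,130; piece 1 at t=90
f = H @ u_true + 0.01 * np.random.randn(N)

# Unconstrained least-squares / pseudo-inverse solution of F = H U.
u_ls, *_ = np.linalg.lstsq(H, f, rcond=None)

# Crude projection toward the {0,1} constraint; a real solver would refine this.
u_hat = (u_ls > 0.5).astype(float)
print(np.nonzero(u_hat[:N])[0], np.nonzero(u_hat[N:])[0])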
Alternatively, you can use something like Nelder-Mead or Levenberg-Marquardt to find the minimum of:
||F - H U|| + lambda g(U)
where g is a regularization function defined as:
g(U) = ||U - U*||
where U*[j] = 0 if |U[j]|<|U[j]-1|, else 1
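A toy scipy sketch of that regularized objective (the sizes and lambda are arbitrary; with nN in the thousands Nelder-Mead will not scale, which is part of the point made next):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
H = rng.standard_normal((20, 8))                 # stand-in for the concatenated Toeplitz blocks
u_true = (rng.random(8) > 0.6).astype(float)
F = H @ u_true
lam = 0.5                                        # arbitrary regularization weight

def objective(U):
    U_star = np.where(np.abs(U) < np.abs(U - 1), 0.0, 1.0)   # nearest of {0, 1}, as defined above
    return np.linalg.norm(F - H @ U) + lam * np.linalg.norm(U - U_star)

res = minimize(objective, x0=np.full(8, 0.5), method="Nelder-Mead")
print(np.round(res.x, 2))
print(u_true)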
Ok, so I have no idea if this will converge. If not, you have to come up with your own regularizer. It's kinda dumb to use a generalized nonlinear optimizer when you have a set of linear equations.
In reality, you're going to have noise and what not, so it actually may not be a bad idea to use something like MAP and apply the small pieces as prior.
I have lots of large (around 5000 x 5000) matrices that I need to invert in Matlab. I actually need the inverse, so I can't use mldivide instead, which is a lot faster for solving Ax=b for just one b.
My matrices are coming from a problem that means they have some nice properties. First off, their determinant is 1, so they're definitely invertible. They aren't diagonalizable, though, or I would try to diagonalize them, invert them, and then put them back. Their entries are all real numbers (actually rational).
I'm using Matlab for getting these matrices and for this stuff I need to do with their inverses, so I would prefer a way to speed Matlab up. But if there is another language I can use that'll be faster, then please let me know. I don't know a lot of other languages (a little bit of C and a little bit of Java), so if it's really complicated in some other language, then I might not be able to use it. Please go ahead and suggest it, though, in case.
I actually need the inverse, so I can't use mldivide instead,...
That's not true, because you can still use mldivide to get the inverse. Note that A^(-1) = A^(-1) * I. In MATLAB, this is equivalent to
invA = A\speye(size(A));
On my machine, this takes about 10.5 seconds for a 5000x5000 matrix. Note that MATLAB does have an inv function to compute the inverse of a matrix. Although this will take about the same amount of time, it is less efficient in terms of numerical accuracy (more info in the link).
First off, their determinant is 1 so they're definitely invertible
Rather than det(A) = 1, it is the condition number of your matrix that dictates how accurate or stable the inverse will be. Note that det(A) = ∏_{i=1..n} λ_i. So just setting λ_1 = M, λ_n = 1/M, and λ_i = 1 for i ≠ 1, n will give you det(A) = 1. However, as M → ∞, cond(A) = M^2 → ∞ and λ_n → 0, meaning your matrix is approaching singularity and there will be large numerical errors in computing the inverse.
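A quick numpy illustration of that point (M and n are arbitrary example values):

import numpy as np

n, M = 500, 1e8
eig = np.ones(n)
eig[0], eig[-1] = M, 1.0 / M        # det = M * (1/M) * 1 * ... * 1 = 1
A = np.diag(eig)

print(np.linalg.det(A))             # ~1, as constructed
print(np.linalg.cond(A))            # ~M^2 = 1e16, right at the limit of double precision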
My matrices are coming from a problem that means they have some nice properties.
Of course, there are other more efficient algorithms that can be employed if your matrix is sparse or has other favorable properties. But without any additional info on your specific problem, there is nothing more that can be said.
I would prefer a way to speed Matlab up
MATLAB uses Gaussian elimination to compute the inverse of a general matrix (full rank, non-sparse, without any special properties) using mldivide, and this is Θ(n^3), where n is the size of the matrix. So, in your case, n = 5000 and there are 1.25 x 10^11 floating point operations. So on a reasonable machine with about 10 Gflops of computational power, you're going to require at least 12.5 seconds to compute the inverse, and there is no way out of this unless you exploit the "special properties" (if they're exploitable).
Inverting an arbitrary 5000 x 5000 matrix is not computationally easy no matter what language you are using. I would recommend looking into approximations. If your matrices are low rank, you might want to try a low-rank approximation M = USV'
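If that route applies to your problem, here is a sketch of what a truncated-SVD (low-rank) approximation of the inverse looks like (the rank r is a made-up parameter; with r equal to the full rank this is just the ordinary pseudo-inverse):

import numpy as np

def truncated_svd_inverse(Mat, r):
    # Keep only the r largest singular values of Mat = U S V'.
    U, s, Vt = np.linalg.svd(Mat)
    return Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T

A = np.random.randn(300, 300)
approx = truncated_svd_inverse(A, 50)                     # r = 50 of 300
print(np.linalg.norm(approx @ A - np.eye(300)))           # nonzero: only an approximation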
Here are some more ideas from math-overflow:
https://mathoverflow.net/search?q=matrix+inversion+approximation
First suppose the eigenvalues are all 1. Let A be the Jordan canonical form of your matrix. Then you can compute A^{-1} using only matrix multiplication and addition by
A^{-1} = I + (I-A) + (I-A)^2 + ... + (I-A)^k
where k < dim(A). Why does this work? Because generating functions are awesome. Recall the expansion
(1-x)^{-1} = 1/(1-x) = 1 + x + x^2 + ...
This means that we can invert (1-x) using an infinite sum. You want to invert a matrix A, so you want to take
A = I - X
Solving for X gives X = I-A. Therefore by substitution, we have
A^{-1} = (I - (I-A))^{-1} = I + (I-A) + (I-A)^2 + ...
Here I've just used the identity matrix I in place of the number 1. Now we have the problem of convergence to deal with, but this isn't actually a problem. By the assumption that A is in Jordan form and has all eigenvalues equal to 1, we know that A is upper triangular with all 1s on the diagonal. Therefore I-A is upper triangular with all 0s on the diagonal. Therefore all eigenvalues of I-A are 0, so its characteristic polynomial is x^dim(A) and its minimal polynomial is x^{k+1} for some k < dim(A). Since a matrix satisfies its minimal (and characteristic) polynomial, this means that (I-A)^{k+1} = 0. Therefore the above series is finite, with the largest nonzero term being (I-A)^k. So it converges.
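A small numpy check of that argument for the all-eigenvalues-1 case (illustrative only):

import numpy as np

def unipotent_inverse(A):
    # A is upper triangular with 1s on the diagonal, so N = I - A is nilpotent
    # and the series I + N + N^2 + ... terminates after at most dim(A)-1 terms.
    n = A.shape[0]
    I = np.eye(n)
    N = I - A
    term, inv = I.copy(), I.copy()
    for _ in range(n - 1):
        term = term @ N
        if not term.any():              # series already terminated
            break
        inv += term
    return inv

A = np.triu(np.random.randn(6, 6), k=1) + np.eye(6)        # random unipotent matrix
print(np.allclose(unipotent_inverse(A) @ A, np.eye(6)))    # expect True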
Now, for the general case, put your matrix into Jordan form, so that you have a block triangular matrix, e.g.:
A 0 0
0 B 0
0 0 C
Where each block has a single value along the diagonal. If that value is a for A, then use the above trick to invert (1/a) * A, and then scale the result by 1/a to recover A^{-1}. Since the full matrix is block triangular the inverse will be
A^{-1} 0 0
0 B^{-1} 0
0 0 C^{-1}
There is nothing special about having three blocks, so this works no matter how many you have.
Note that this trick works whenever you have a matrix in Jordan form. The computation of the inverse in this case will be very fast in Matlab because it only involves matrix multiplication, and you can even use tricks to speed that up since you only need powers of a single matrix. This may not help you, though, if it's really costly to get the matrix into Jordan form.
I've read about it on a message board - the Random class isn't really random. It is created in a predictable fashion using a mathematical formula.
Is it really true? If so, Random isn't really random??
Because deterministic computers are really bad at generating "true" random numbers by themselves.
Also, a predictable/repeatable random sequence is often surprisingly useful, since it helps in testing.
It's really hard to create something that is absolutely random. See the Wikipedia articles on randomness and pseudo-randomness
As others have already said, Random creates pseudo-random numbers, depending on some seed value. It may be helpful to know that the .NET class Random has two constructors:
Random(int Seed)
creates a random number generator with a given seed value, helpful if you want reproducible behaviour of your program. On the other hand,
Random()
creates a random number generator with a date-time dependent seed value, which means that almost every time you start your program again, it will produce a different sequence of (pseudo-)random numbers.
The sequence is predictable for each starting seed. For different seeds, different sequences of numbers are returned. If the seed used is itself random (such as DateTime.Now.Ticks), then the numbers returned are adequately 'random'.
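The same behaviour is easy to see in any pseudo-random generator; for example, in Python (used here just for illustration, the .NET Random class behaves analogously):

import random

a = random.Random(42)       # two generators with the same seed...
b = random.Random(42)
print([a.randint(0, 99) for _ in range(5)])
print([b.randint(0, 99) for _ in range(5)])   # ...produce exactly the same sequence

c = random.Random()         # seeded from the OS / current time instead
print([c.randint(0, 99) for _ in range(5)])   # differs from run to run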
Alternatively, you can use a cryptographic random number generator such as the RNGCryptoServiceProvider class.
It isn't random; it's a random-like number generating algorithm, and it's based on a seed number. If you set that seed to something like the system time, the numbers are closer to random, but if you use these numbers for, let's say, an encryption algorithm, and the attacker knows WHEN you generated the random numbers and which algorithm you use, then it is more likely that your encryption will be broken.
The only way to generate true random numbers is to measure something natural, for example voltage levels, or to have a microphone picking up sounds somewhere, or something like that.
It is true, but you can always seed the random number generator with some time dependent value, or if you're really prepared to push the boat out, look at www.random.org...
In the case of the Random class though, I think it should be random enough for most requirements... I can't see a method to actually seed it, so I'm guessing it must automatically seed as built-in behaviour...
Correct. Class Random is not absolutely totally random. The important question is, is it as statistically close to being random as you need it to be. The output from class Random is statistically as nearly random as a reasonable deterministic program can be. The algorithm uses a 48-bit seed modified by a linear congruential formula. If the Random object is created using the parameterless constructor, the 48 low-order bits of the millisecond time get used as the seed. If the Random object is created using the seed parameter (a long), the 48 low-order bits of the long get used as the seed.
If Random is instantiated with the same seed and the exact same sequence of next calls is made from it, the exact same sequence of values will result from that instance. This is deliberate, to allow for predictable software testing and demonstrations. Ordinarily, Random is not used with a constant seed for operational use, since it is usually used to get unpredictable pseudo-random sequences. If two instances of Random with the parameterless constructors get created in the same clock millisecond, they will also get the same sequences from both instances. It is important to note that, eventually, a Random instance will repeat its pattern. Therefore, a Random instance should not be used for enormously long sequences before creating a new instance.
There is no reason not to use the Random class except for high-security cryptographic applications or some special need where some aspect of true randomness is of paramount importance, something that is uncommon. In those cases, you really need a hardware randomizer that uses radioactive decay or infinitesimal molecular level brownian motion induced randomness to generate a random result. Sun SPARC hardware platforms had such hardware installable. Other platforms can have them too, along with the hardware drivers that give access to the randomness they generate.
The algorithm used in class Random is the result of considerable research by some of the best minds in computer science and mathematics. Given the right parameters, it provides remarkable and outstanding results. Other more recent algorithms may be better for some limited applications, but they also have performance or specific application issues that make them less suitable for general purpose use. The linear congruential algorithm still remains one of the most widely used general purpose pseudo-random number generators.
The following quote is from Donald Knuth's book, The Art of Computer Programming, Volume 2, Semi-numerical Algorithms, Section 3.2.1. The quote describes the linear congruential method and discusses its properties. If you don't know who Donald Knuth is or have never read any of his papers or books, he, amongst other things, showed that there can be no sort faster than Tony Hoare's Quicksort with partition pivot strategies created by Robert Sedgewick. Robert Sedgewick, who suggested the best simple pivot selection strategies for Quicksort, did his doctoral thesis on Quicksort under Donald Knuth's supervision. Knuth's multi-volume work, The Art Of Computer Programming, is one of the greatest expositions of the most important theoretical aspects of computing ever assembled, including sorting, searching and randomizing algorithms. There is a lot of discussion in Chapter 3 of this about what randomness really is, statistically and philosophically, and about software that emulates true randomness to the point where it is statistically nearly indistinguishable from it for very large, but still finite, sequences. What follows is pretty heavy reading:
3.2.1. The Linear Congruential Method
By far the most popular random number generators in use today are special cases of the following scheme, introduced by D. H. Lehmer in 1949. [See Proc. 2nd Symp. on Large-Scale Digital Calculating Machinery (Cambridge, Mass.: Harvard University Press, 1951), 141-146.]
We choose four magic integers:
m, the modulus; 0 < m.
a, the multiplier; 0 <= a < m.
c, the increment; 0 <= c < m.
X[0], the starting value; 0 <= X[0] < m. (equation 1)
The desired sequence of random numbers (X[n]) is then obtained by setting
X[n+1] = (a * X[n] + c) mod m, n >= 0. (equation 2)
This is called a linear congruential sequence. Taking the remainder mod m is somewhat like determining where a ball will land in a spinning roulette wheel.
For example, the sequence obtained when m == 10 and X[0] == a == c == 7 is
7, 6, 9, 0, 7, 6, 9, 0, ... . (example 3)
As this example shows, the sequence is not always "random" for all choices of m, a, c, and X[0]; the principles of choosing the magic numbers appropriately will be investigated carefully in later parts of this chapter.
Example (3) illustrates the fact that the congruential sequences always get into a loop: there is ultimately a cycle of numbers that is repeated endlessly. This property is common to all sequences having the general form X[n+1] = f(X[n]), when f transforms a finite set into itself; see exercise 3.1-6. The repeating cycle is called the period; sequence (3) has a period of length 4. A useful sequence will of course have a relatively long period.
The special case c == 0 deserves explicit mention, since the number generation process is a little faster when c == 0 than it is when c != 0. We shall see later that the restriction c == 0 cuts down the length of the period of the sequence, but it is still possible to make the period reasonably long. Lehmer's original generation method had c == 0, although he mentioned c != 0 as a possibility; the fact that c != 0 can lead to longer periods is due to Thomson [Comp. J. 1 (1958), 83, 86] and, independently, to Rotenberg [JACM 7 (1960), 75-77]. The terms multiplicative congruential method and mixed congruential method are used by many authors to denote linear congruential sequences with c == 0 and c != 0, respectively.
The letters m, a, c, and X[0] will be used throughout this chapter in the sense described above. Furthermore, we will find it useful to define
b = a - 1, (equation 4)
in order to simplify many of our formulas.
We can immediately reject the case a == 1, for this would mean that X[n] = (X[0] + n * c) mod m, and the sequence would certainly not behave as a random sequence. The case a == 0 is even worse. Hence for practical purposes we may assume that
a >= 2, b >= 1. (equation 5)
Now we can prove a generalization of Eq. (2),
X[n+k] = (a^k * X[n] + (a^k - 1) * c / b) mod m, k >= 0, n >= 0, (equation 6)
which expresses the (n+k)th term directly in terms of the nth term. (The special case n == 0 in this equation is worthy of note.) It follows that the subsequence consisting of every kth term of (X[n]) is another linear congruential sequence, having the multiplier a^k mod m and the increment ((a^k - 1) * c / b) mod m.
An important corollary of (6) is that the general sequence defined by m, a, c, and X[0] can be expressed very simply in terms of the special case where c == 1 and X[0] == 0. Let
Y[0] = 0, Y[n+1] = (a * Y[n] + 1) mod m. (equation 7)
According to Eq. (6) we will have Y[k] ≡ (a^k - 1) / b (modulo m), hence the general sequence defined in (2) satisfies
X[n] = (A * Y[n] + X[0]) mod m, where A == (X[0] * b + c) mod m. (equation 8)