Image encryption using Hénon equation

I want to encrypt pixel values using the Hénon equation:
X[i+2] = 1 - a*X[i+1]^2 + b*X[i]
where a = 1.4, b = 0.3, x0 = 0.01, x1 = 0.02, with this code:
k[i+2] = 1 - a*Math.pow(k[i+1], 2) + b*k[i]
With it I get pseudo-random values from the Hénon equation:
1.00244,
-0.40084033504000005,
1.0757898361270288,
-0.7405053806319072,
0.5550494445953806,
0.3465365454865311,
0.99839222507778,
-0.2915408854881054,
1.1805231444476698,
-1.038551118053691,
-0.15586685140049938,
0.6544223990721852,
After that, I round each random value with this code:
inter[i] = (int) Math.round((k[i]*65536) % 256)
and I can encrypt each pixel value by XORing it with the rounded (Hénon) value.
My question: some of the random values from the Hénon map are negative, and as we know there are no negative pixel values. So may I skip the negative random values (keeping only the positive ones) when encrypting the original pixel values?
Thanks

You are using the Hénon sequence as a source for pseudo-random numbers, right?
Then you can of course choose to discard negative numbers (or take the absolute value, or do some other fancy thing), as long as you do the same in encryption and decryption. If there is a specification, it had better be explicit about this.
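For concreteness, here is a minimal C sketch of that idea. The constants come from the question; taking the absolute value of negative iterates is just one of the choices mentioned above, and the decrypting side must make the same choice (XOR is its own inverse, so the same keystream decrypts):

    #include <math.h>
    #include <stdio.h>

    /* Sketch: derive a keystream byte from each Henon iterate, taking the
       absolute value of negative iterates, then XOR it with a pixel. */
    int main(void) {
        double a = 1.4, b = 0.3, k0 = 0.01, k1 = 0.02;
        unsigned char pixel = 200;                /* example pixel value */
        for (int i = 0; i < 10; i++) {
            double k2 = 1.0 - a * k1 * k1 + b * k0;
            double v = fabs(k2);                  /* discard the sign */
            unsigned key = (unsigned)fmod(v * 65536.0, 256.0);
            printf("key=%3u  pixel^key=%3u\n", key, pixel ^ key);
            k0 = k1;                              /* slide the two-term window */
            k1 = k2;
        }
        return 0;
    }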
Maybe you are using JavaScript or some other language where % is not modulus but remainder. If so, see this answer.
Three other things to note:
Double-check that you are calculating the right thing. It seems to me that your calculation should read k[i+1] = 1 - a*Math.pow(k[i], 2) + b*k[i], since the Hénon sequence only uses the last value.
Do you really need to store past values of k? If not, then just use
k = 1 - a*Math.pow(k, 2) + b*k
or, even better,
k = 1 + k * (b - a * k)
(Spoiler warning: this may be the didactical point of the exercise.) The Hénon sequence is chaotic, and floating-point errors will sooner or later influence the random numbers. So your random number generator may not be as deterministic as you think.

Related

ES6: rolling and testing my own PRNG hash hex key generator?

[EDIT: post was originally too long... trying to shorten it!]
For a VNode-based app I'm building, I need a function that can generate pseudo-random numbers to use as unique IDs, 3 or 4 bytes in size.
Think of random RGB or RGBA, CSS-like color strings such as 'bada55' or '900dcafe'. The user could seed the generator by picking a random pixel in a PNG image, for example.
Due to the functional aspects of my app, the generator must be a pure function, avoiding side effects such as calls to Math.random.
I'm not well versed in arithmetic theory (my courses are far in the past...), but I decided to roll a custom PRNG using the Multiply-With-Carry (MWC) paradigm and test it empirically with some prime coefficients and random color seeds.
My idea is to test it with 1-byte, 2-byte, 3-byte, and then 4-byte outputs. My feelings are:
identify "good" primes and potential 'bad' seeds when the number of bytes is low,
and then try to test it against an increasing number of bytes.
[EDITED FROM HERE]
MWC usually works as follows:
# each turn, compute the i-th byte:
Xi[i] = A * Zi[i] + Ci[i]
Ci[i] = Math.floor(Xi[i] / M)
Zi[i] = Xi[i] % M
where A is the multiplier, C the carry, and M the modulus. For bytes, the modulus is 256.
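For reference, a single MWC cell of this kind looks roughly like this in C. This is only a sketch: the multiplier and seed below are made-up example values, not vetted full-period constants:

    #include <stdint.h>
    #include <stdio.h>

    /* One MWC cell with M = 256, as described above. */
    typedef struct { uint32_t z, c; } mwc8;

    static uint8_t mwc8_next(mwc8 *s, uint32_t a) {
        uint32_t x = a * s->z + s->c;
        s->c = x >> 8;    /* new carry = floor(x / 256) */
        s->z = x & 0xFF;  /* new state = x mod 256      */
        return (uint8_t)s->z;
    }

    int main(void) {
        mwc8 s = { 0xBA, 1 };  /* seed byte and initial carry (example values) */
        for (int i = 0; i < 8; i++)
            printf("%02x", mwc8_next(&s, 207)); /* a = 207 is just an example */
        printf("\n");
        return 0;
    }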
It's possible to determine A and C mathematically (prime numbers) to get a full-cycle generator.
The trouble is that when one byte cell loops back to its seed value, ALL the cells do... so the period is 256, even with 4 bytes!
I need to build something like an odometer to shift the values of the other cells and guarantee a period of Math.pow(256, 4).
How can I achieve that in a simple way, if possible?

Should I provide consistency checks in the Huffman tree building algorithm for DEFLATE?

In RFC 1951 there is a simple algorithm that restores the Huffman tree from a list of code lengths, described in the following way:
1) Count the number of codes for each code length. Let
   bl_count[N] be the number of codes of length N, N >= 1.
2) Find the numerical value of the smallest code for each
   code length:

       code = 0;
       bl_count[0] = 0;
       for (bits = 1; bits <= MAX_BITS; bits++) {
           code = (code + bl_count[bits-1]) << 1;
           next_code[bits] = code;
       }

3) Assign numerical values to all codes, using consecutive
   values for all codes of the same length with the base
   values determined at step 2. Codes that are never used
   (which have a bit length of zero) must not be assigned a
   value.

       for (n = 0; n <= max_code; n++) {
           len = tree[n].Len;
           if (len != 0) {
               tree[n].Code = next_code[len];
               next_code[len]++;
           }
       }
But there are no data consistency checks in the algorithm. On the other hand, it is obvious that the list of lengths can be invalid. The length values themselves cannot be invalid, since they are encoded in 4 bits, but, for example, there can be more codes of some length than can actually be encoded.
What is the minimal set of checks that will provide data validation? Or are such checks unnecessary for some reason that I missed?
zlib checks that the list of code lengths is both complete, i.e. that it uses up all bit patterns, and that it does not overflow the bit patterns. The one allowed exception is when there is a single symbol with length 1, in which case the code is allowed to be incomplete (the bit 0 means that symbol; a 1 bit is undefined).
This helps zlib reject random, corrupted, or improperly coded data with higher probability and earlier in the stream. This is a different sort of robustness than what was suggested in another answer here, where you could alternatively permit incomplete codes and only return an error when an undefined code is encountered in the compressed data.
To calculate completeness, you start with the number of bits in the code at k = 1 and the number of possible codes at n = 2: there are two possible one-bit codes. You subtract from n the number of length-1 codes, n -= bl_count[k]. Then you increment k to look at two-bit codes, and you double n. Subtract the number of two-bit codes. When you're done, n should be zero. If at any point n goes negative, you can stop right there, as you have an invalid set of code lengths. If n is greater than zero when you're done, then you have an incomplete code.
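A direct C rendering of that counting argument might look like this (a sketch, not zlib's actual code; bl_count[] is the array from RFC 1951's step one):

    #define MAX_BITS 15

    /* Returns -1 if the lengths are over-subscribed, +1 if the code is
       incomplete, 0 if exactly complete. bl_count[k] is the number of
       codes of length k. */
    int check_lengths(const int bl_count[MAX_BITS + 1]) {
        int n = 1;                 /* doubled to 2 on the first iteration */
        for (int k = 1; k <= MAX_BITS; k++) {
            n <<= 1;               /* double the number of possible codes */
            n -= bl_count[k];      /* consume the codes of this length    */
            if (n < 0)
                return -1;         /* over-subscribed: invalid lengths    */
        }
        return n > 0 ? 1 : 0;      /* leftover patterns = incomplete code */
    }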
I think that checking that next_code[len] does not overflow past its respective bit length is enough. So after tree[n].Code = next_code[len]; you can do the following check (note the extra parentheses: in C, == binds tighter than &):
if ((tree[n].Code & ((1 << len) - 1)) == 0)
    print(Error);
If tree[n].Code & ((1 << len) - 1) reaches 0, it means that there are more codes of length len than there should be, so the length list had an error in it.
On the other hand, if every symbol of the tree is assigned a valid (unique) code, then you have created a correct Huffman tree.
EDIT: It just dawned on me: you can simply make the same check at the end of step one. You just have to check that bl_count[N] <= 2^N - SUM((2^j) * bl_count[N-j]), where the sum runs over 1 <= j <= N, for all N >= 1 (if a binary tree has bl_count[N-1] leaves at level N-1, then it cannot have more than 2^N - 2*bl_count[N-1] leaves at level N, level 0 being the root).
This guarantees that the code you create is a prefix code, but it does not guarantee that it is the same code the original creator intended. If, for example, the length list is invalid in a way that still lets you create a valid prefix code, you cannot prove that it is the Huffman code, simply because you do not know the frequency of occurrence of each symbol.
You need to make sure that there is no input that will cause your code to execute illegal or undefined behavior, such as indexing off the end of an array, because such illegal inputs might be used to attack your code.
In my opinion, you should attempt to handle illegal but not dangerous inputs as gracefully as possible, so as to inter-operate with programs written by others which may interpret the specification in a different way than you have, or which have made small errors which have only one plausible interpretation. This is the Robustness principle - you can find discussions of this starting at http://en.wikipedia.org/wiki/Robustness_principle.

"interval is empty", Lua math.random isn't working for large numbers?

I don't know if this is a bug in Lua itself or if I'm doing something wrong. I couldn't find anything about it anywhere. I am using Lua for Windows (Lua 5.1.4):
>return math.random(0, 1000000000)
1251258
This returns a random integer between 0 and 1000000000, as expected. This seems to work for all other values. But if I add a single 0:
>return math.random(0, 10000000000)
stdin:1: bad argument #2 to 'random' (interval is empty)
Any number higher than that does the same thing.
I tried to figure out exactly how high a number has to be to cause this and found something even weirder:
>return math.random(0, 2147483647)
-75617745
If the value is 2147483647 then it gives me negative numbers. Any higher than that and it throws an error. Any lower than that and it works fine.
That's 0b1111111111111111111111111111111 in binary, exactly 31 binary digits. I am not sure what that means, though.
This unexpected behavior (bug?) is due to how math.random treats its input arguments in Lua 5.1. From lmathlib.c:
case 2: {  /* lower and upper limits */
    int l = luaL_checkint(L, 1);
    int u = luaL_checkint(L, 2);
    luaL_argcheck(L, l<=u, 2, "interval is empty");
    lua_pushnumber(L, floor(r*(u-l+1))+l);  /* int between `l' and `u' */
    break;
}
As you may know, in C a standard 32-bit int can represent values from -2,147,483,648 to 2,147,483,647. Adding 1 to 2,147,483,647, as in your use case, overflows and wraps around, giving -2,147,483,648. The end result is negative because you're multiplying a positive by a negative number.
Furthermore, anything above 2,147,483,647 will fail the luaL_argcheck due to overflow wraparound.
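A tiny standalone C demonstration of the wraparound (assuming 32-bit int, as in a typical Lua 5.1 build):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int l = 0, u = INT_MAX;                  /* math.random(0, 2147483647) */
        long long exact = (long long)u - l + 1;  /* 2147483648, the real width */
        int wrapped = (int)exact;                /* implementation-defined: on
                                                    two's-complement machines
                                                    this wraps to -2147483648 */
        printf("exact=%lld wrapped=%d\n", exact, wrapped);
        return 0;
    }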
There are a few ways to address this problem:
Upgrade to Lua 5.2. That one has since fixed this issue by treating the input arguments as lua_Number instead.
Switch to LuaJIT which does not have this integer overflow issue.
Patch the Lua 5.1 source yourself with the fix and recompile.
Modify your random range so it does not overflow.
If you need a range that is larger than what the random function supports (32-bit signed integers, i.e. 2^31, due to the sign bit, because math.random works at the C level), but smaller than the range of the Lua "number" type (based on What is the maximum value of a number in Lua?, 2^52, or maybe even 2^53), you can try generating two random numbers: scale the first to the desired range and add the second to "fill the gap". For example, say you want a range of 0 to 2^36. The largest from math.random is 2^31. So you could do:
-- 2^36 = 2^31 * 2^5, so
scale = 2^5
baseRand = scale * math.random(0, 2^31 - 1)
-- baseRand is now a multiple of 2^5 between 0 and 2^36 - 2^5, so there are
-- gaps of 2^5 in the set of possible values; fill the gaps with a second
-- random number (0 .. 2^5 - 1, to avoid overlapping the next bucket):
fillGap = math.random(0, 2^5 - 1)
randNum = baseRand + fillGap
This will work as long as the desired range is less than the Lua interpreter's maximum for Lua numbers, which is a configurable compile-time parameter, but in a stock build it is 2^52, a very large number (although not as large as the largest long integer, 2^63).
Note also that the largest positive N-bit integer is 2^N - 1 (not 2^N), but the above technique can be applied to any range; you could have, for instance, scale = 10^6 and then randNum = 10^6 * math.random(0, 10^8) + math.random(0, 10^6 - 1).

Bijective "Integer <-> String" function

Here's a problem I'm trying to create the best solution for. I have a finite set of non-negative integers in the range [0...N]. I need to be able to represent each number in this set as a string, and to convert such a string back to the original number. So this should be a bijective function.
Additional requirements are:
The string representation of a number should obfuscate the original number at least to some degree. So a primitive solution like f(x) = x.toString() will not work.
String length is important: the shorter the better.
If one knows the string representation of K, it should be non-trivial (to some degree) to guess the string representation of K+1.
For p.1 and p.2 the obvious solution is to use something like Base64 (or whatever BaseXXX fits all the values) notation. But can we satisfy p.3 with minimal additional effort? Common sense tells me that I additionally need a bijective "String <-> String" function for the BaseXXX values. Any suggestions?
Or maybe there's something better than BaseXXX to use to fit all 3 requirements?
If you do not need this to be too secure, you can just use a simple symmetric cipher after encoding in BaseXXX. For example, you can choose a key sequence of integers [n₁, n₂, n₃...] and then use a Vigenère cipher.
The basic idea behind the cipher is simple: encode each character C as C + K (mod 26), where K is an element of the key. As you go along, just take the next number from the key for the next character, wrapping around once you run out of values in the key.
You really have two options here: you can first convert the number to a string in BaseXXX and then encrypt, or you can use the same idea to just encrypt each number as a single character. In that case, you would want to change it from mod 26 to mod N+1.
Come to think of it, an even simpler option would be to just XOR the element from the key and the value (as opposed to using the Vigenère formula). I think this would work just as well for obfuscation.
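For illustration, a minimal C sketch of that XOR variant, with the number hex-encoded afterwards. The key is a made-up example value, and this is obfuscation, not security:

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define KEY 0x5ABADA55u  /* hypothetical secret key */

    /* number -> obfuscated hex string (8 chars + NUL) */
    static void enc(uint32_t x, char out[9]) {
        snprintf(out, 9, "%08" PRIx32, x ^ KEY);
    }

    /* obfuscated hex string -> number */
    static uint32_t dec(const char *s) {
        return (uint32_t)strtoul(s, NULL, 16) ^ KEY;
    }

    int main(void) {
        char buf[9];
        enc(1234, buf);
        printf("%s -> %" PRIu32 "\n", buf, dec(buf));  /* round-trips to 1234 */
        return 0;
    }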
This method meets requirements 1-3, but it is perhaps a bit too computationally expensive:
find a prime p > N+2, not too much larger
find a primitive root g modulo p, that is, a number whose multiplicative order modulo p is p-1
for 0 <= k <= N, let enc(k) = min {j > 0 : g^j == (k+2) (mod p)}
f(k) = enc(k).toString()
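A brute-force C sketch of this scheme, with toy parameters: p = 257 is prime and g = 3 is a primitive root mod 257, so it handles N <= 254. The O(p) loop in enc is the "computationally expensive" part mentioned above:

    #include <stdio.h>

    #define P 257
    #define G 3

    /* enc(k) = smallest j > 0 with G^j == k+2 (mod P) */
    static unsigned enc(unsigned k) {
        unsigned long long v = 1;
        for (unsigned j = 1; j < P; j++) {
            v = (v * G) % P;
            if (v == k + 2) return j;
        }
        return 0;  /* unreachable for 0 <= k <= P-3 */
    }

    /* inverse: G^j mod P, minus 2 */
    static unsigned dec(unsigned j) {
        unsigned long long v = 1;
        for (unsigned i = 0; i < j; i++) v = (v * G) % P;
        return (unsigned)v - 2;
    }

    int main(void) {
        for (unsigned k = 0; k < 5; k++)
            printf("%u -> \"%u\" -> %u\n", k, enc(k), dec(enc(k)));
        return 0;
    }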
Construct a table of length M. This table should map the numbers 0 through M-1 to distinct short strings in a random ordering. Express the integer as a base-M number, using the strings from the table to represent the digits in the number. Decode with a straightforward reversal.
With M=26, you could just use a letter for each of the digits. Or take M=256 and use a byte for each digit.
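A sketch of the table approach in C with M = 16 and single-character "strings"; the digit alphabet below is a made-up random ordering, and decoding assumes its input contains only table characters:

    #include <stdio.h>
    #include <string.h>

    static const char TAB[] = "kq3zs8xj1mwc7f0a";  /* shuffled hex digits */

    /* integer -> string, most significant digit first */
    static void tab_encode(unsigned n, char *out) {
        char tmp[16];
        int i = 0;
        do { tmp[i++] = TAB[n % 16]; n /= 16; } while (n);
        while (i) *out++ = tmp[--i];  /* reverse into the output */
        *out = '\0';
    }

    /* string -> integer, reversing the digit lookup */
    static unsigned tab_decode(const char *s) {
        unsigned n = 0;
        for (; *s; s++)
            n = n * 16 + (unsigned)(strchr(TAB, *s) - TAB);
        return n;
    }

    int main(void) {
        char buf[16];
        tab_encode(123123, buf);
        printf("%s -> %u\n", buf, tab_decode(buf));  /* round-trips */
        return 0;
    }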
Not even remotely a good cryptographic approach!
So you need a string that obfuscates the original number, but allows one to determine str(K+1) when str(K) is known?
How about just doing f(x) = (x + a).toString(), where a is secret? Then an outside user can't determine x from f(x), but they can be confident that if they have a string "1234", say, for an unknown x then "1235" maps to x+1.
p.1 and p.3 are slightly contradictory and a bit vague, too.
I would propose using the hex representation of the integers:
17 => 0x11
123123 => 0x1E0F3

linear interpolation on 8bit microcontroller

I need to do a linear interpolation over time between two values on an 8-bit PIC microcontroller (specifically a 16F627A, but that shouldn't matter) using PIC assembly language, although I'm looking for an algorithm here as much as actual code.
I need to take an 8-bit starting value, an 8-bit ending value, and a position between the two (currently represented as an 8-bit number 0-255, where 0 means the output should be the starting value and 255 means it should be the final value, but that can change if there is a better way to represent this) and calculate the interpolated value.
Now, PIC doesn't have a divide instruction, so I could code up a general-purpose divide routine and effectively calculate (B-A)/(x/255)+A at each step, but I feel there is probably a much better way to do this on a microcontroller than the way I'd do it on a PC in C++.
Has anyone got any suggestions for implementing this efficiently on this hardware?
The value you are looking for is (A*(255-x) + B*x)/255. It requires only an 8x8 multiplication, and a final division by 255, which can be approximated by simply taking the high byte of the sum.
If you choose x in the range 0..128, no approximation is needed: take the high byte of (A*(128-x) + B*x) << 1.
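In C, a sketch of both variants might look like this (for reference only; on the PIC it maps to an 8x8 multiply routine plus taking the high byte):

    #include <stdint.h>

    /* approximate: (a*(255-x) + b*x) / 255, with /255 ~ taking the high byte */
    uint8_t lerp255(uint8_t a, uint8_t b, uint8_t x) {
        uint16_t s = (uint16_t)a * (255 - x) + (uint16_t)b * x;
        return (uint8_t)(s >> 8);
    }

    /* exact for x in 0..128: high byte of (a*(128-x) + b*x) << 1 */
    uint8_t lerp128(uint8_t a, uint8_t b, uint8_t x) {
        uint16_t s = (uint16_t)(((uint16_t)a * (128 - x) + (uint16_t)b * x) << 1);
        return (uint8_t)(s >> 8);
    }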
Assuming you interpolate a sequence of values where the previous endpoint is the new start point:
(B-A)/(x/255)+A
sounds like a bad idea. If you use base 255 as a fixed-point representation, you get the same interpolant twice: you get B when x=255, and then B again as the new A when x=0.
Use 256 as the fixed-point base. Divides become shifts, but you need 16-bit arithmetic and an 8x8 multiplication with a 16-bit result. The previous issue is fixed by simply ignoring any bits in the higher bytes, since x mod 256 becomes 0. This suggestion uses 16-bit multiplication but can't overflow, and you don't interpolate over the same x twice.
interp = (a*(256 - x) + b*x) >> 8
256 - x becomes just a subtract-with-borrow, as you get 0 - x.
The PIC lacks these operations in its instruction set:
Right and left shifts (both logical and arithmetic).
Any form of multiplication.
You can get a right shift by using rotate-right instead, followed by masking out the extra bits on the left with a bitwise AND. A straightforward way to do 8x8 multiplication with a 16-bit result:
void mul16(
    unsigned char* hi, /* in: operand1, out: the most significant byte  */
    unsigned char* lo  /* in: operand2, out: the least significant byte */
)
{
    unsigned char a, b;
    /* loop over the smallest value */
    a = (*hi <= *lo) ? *hi : *lo;
    b = (*hi <= *lo) ? *lo : *hi;
    *hi = *lo = 0;
    while (a) {
        *lo += b;
        if (*lo < b)  /* unsigned overflow; use the carry flag instead */
            (*hi)++;  /* note: (*hi)++, not *hi++, to bump the value   */
        --a;
    }
}
The techniques described by Eric Bainville and Mads Elvheim will work fine; each one uses two multiplies per interpolation.
Scott Dattalo and Tony Kubek have put together a super-optimized PIC-specific interpolation technique called "twist" that is slightly faster than two multiplies per interpolation.
Is using this difficult-to-understand technique worth running a little faster?
You could do it using 8.8 fixed-point arithmetic. A number from the range 0..255 would then be interpreted as 0.0 ... 0.996, and you would be able to multiply and normalize it.
Tell me if you need any more details or if that's enough for you to start.
You could characterize this instead as:
(B-A)*(256/(x+1))+A
using a value range of x = 0..255, precompute the values of 256/(x+1) as fixed-point numbers in a table, and then code a general-purpose multiply and adjust for the position of the binary point. This might not be small space-wise; I'd expect you to need a 256-entry table of 16-bit values and the multiply code. (If you don't need speed, this suggests your division method is fine.) But it takes only one multiply and an add.
My guess is that you don't need every possible value of x. If there are only a few values of x, you can compute them offline, do a case-select on the specific value of x, and then implement the multiply as a fixed sequence of shifts and adds for that specific value. That's likely to be pretty efficient in code and very fast for a PIC, as in the sketch below.
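As a hypothetical example of that last idea: if the only position ever needed were x = 80, the multiply reduces to two shifts and an add, since 80 = 64 + 16:

    #include <stdint.h>

    /* multiply by the fixed constant 80 = (1<<6) + (1<<4): two shifts, one add */
    uint16_t mul80(uint8_t v) {
        return (uint16_t)(((uint16_t)v << 6) + ((uint16_t)v << 4));
    }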
Interpolation
Given two values X and Y, it's basically:
(X+Y)/2
or
X/2 + Y/2 (to prevent the odd case where X+Y might overflow the size of the register)
Hence try the following (pseudo-code):

Initially A = MAX, B = MIN
Loop {
    Right-shift A by 1 bit.
    Right-shift B by 1 bit.
    C = ADD the two results.
    Check the MSB of the 8-bit interpolation value.
    If MSB = 0, then B = C.
    If MSB = 1, then A = C.
    Left-shift the 8-bit interpolation value.
} Repeat until the 8-bit interpolation value becomes zero.
The actual code is just as easy; I just don't remember the registers and instructions off-hand.
