Hill Cipher using a 2 x 2 Key Matrix

I'm new to cryptography and I cannot seem to get my head around this problem:
The problem says that the Hill Cipher using the below 2 x 2 key matrix (K) was used to produce the ciphered text "KCFL".
K = (3 5)
(2 3)
It then asks to use the Hill Cipher to show the calculations and the plain text when I decipher the same encrypted message "KCFL".
I know with other matrices, e.g. for the determinant there is usually a formula, such as:
a x d - b x c
However, for the Hill Cipher I am completely lost.
I have done the following:
a) found the inverse of K:
K inverse = (-3 5)
(2 -3)
b) Converted "KCFL" to numbers (A = 0, ..., Z = 25):
KCFL = (10 5)
(2 11)
c) The next step (mod 26) confuses me. How do I decipher (using mod 26) and the Cipher Key to find the plain text?
Any help is greatly appreciated.
Many thanks.

To reduce the matrix mod 26, take each entry mod 26. If an entry is negative, add multiples of 26 until it lands in the range 0-25.
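Putting the pieces together, here is a minimal Python sketch (my own code, assuming the usual A=0 ... Z=25 mapping and the column-vector convention, which matches the matrices the asker wrote down):

```python
def hill_decrypt_2x2(ciphertext, key):
    # Invert the 2x2 key matrix mod 26 via the adjugate formula.
    a, b = key[0]
    c, d = key[1]
    det = (a * d - b * c) % 26
    det_inv = pow(det, -1, 26)  # modular inverse of the determinant (Python 3.8+)
    inv = [[(d * det_inv) % 26, (-b * det_inv) % 26],
           [(-c * det_inv) % 26, (a * det_inv) % 26]]
    nums = [ord(ch) - ord('A') for ch in ciphertext]
    out = []
    # Decrypt two letters at a time as column vectors: p = K_inv * c (mod 26).
    for i in range(0, len(nums), 2):
        x, y = nums[i], nums[i + 1]
        out.append((inv[0][0] * x + inv[0][1] * y) % 26)
        out.append((inv[1][0] * x + inv[1][1] * y) % 26)
    return ''.join(chr(n + ord('A')) for n in out)

print(hill_decrypt_2x2("KCFL", [[3, 5], [2, 3]]))  # -> GOOD
```

With the key above, det = -1 ≡ 25 (mod 26), its modular inverse is 25, and the key inverse works out to ((23 5), (2 23)), which is exactly the asker's integer inverse ((-3 5), (2 -3)) reduced mod 26.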
This may also help you.
26 modulo in hill cipher encryption

Related

a + b mod n vs. a xor b mod n

I'm procedurally generating a world in a game, and therefore use two equivalent pseudo-random generators (linear congruential), one for the x axis and one for the y axis (each with a different seed, of course).
Now to be able to create different worlds I want to combine both pseudo-random values (e.g. both between 0 and 10) for each position (x,y).
I first thought of hashing (to hit each number from 0 to 10 roughly equally often), but then found that XORing both ints might be more performant (of course, you might call that hashing too).
Now I still want to generate values between 0 and 10. So I would do:
(1) (r1 ^ r2) % 11.
(^ being bitwise XOR)
And here I was wondering if that is equivalent to
(2) (r1 + r2) % 11
And which of the two would be more performant? I'd guess (1), because it does not propagate carry bits?
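They are not equivalent: addition and XOR differ exactly by the carries, since r1 + r2 = (r1 ^ r2) + 2 * (r1 & r2). A quick brute-force check over the question's 0-10 range:

```python
# Count pairs in 0..10 where formulas (1) and (2) disagree mod 11.
disagree = [(r1, r2) for r1 in range(11) for r2 in range(11)
            if (r1 ^ r2) % 11 != (r1 + r2) % 11]
print(len(disagree) > 0)           # True: (1) and (2) are different functions
print((6 ^ 3) % 11, (6 + 3) % 11)  # 5 9 - one concrete differing pair
```

The identity above also shows when they do agree in this range: exactly when r1 & r2 == 0, i.e. when the two operands share no set bits.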

generate 9-byte alphanumeric from a seed of a 10-digit number

I have a unique 10-digit phone number, and I want to generate a 9-character unique alphanumeric id from it. It doesn't need to be reversible, but the same unique alphanumeric id should be generated from the same phone number.
Here is one possibility. It gives a unique 9-character alphanumeric identifier to all numbers in the range 0 to 9999999999 in such a way that the inverse is not easily computable (with only 10 billion possible numbers genuine security is impossible, but it is easy enough to make it difficult for casual users). It is based on modular exponentiation using a primitive root mod p, where p is a prime chosen to be larger than 10^10:
1) First add 1 to the number to make sure that it isn't 0.
2) Then raise the primitive root to this number, mod p. This is easy to do with modular exponentiation by squaring.
3) Write the result in hex.
4) Pad with 'X' if the result has fewer than 9 hex digits.
Here is a Python implementation:
p = 10000000259  # prime > 10^10
a = 17           # primitive root mod p

def unique_id(num):
    # assumes num is an integer in the range 0 to 9999999999
    num += 1  # so num is in the range 1 to p-1
    num = pow(a, num, p)
    h = hex(num)[2:]
    return (h + 'x' * (9 - len(h))).upper()
For example:
>>> unique_id(12024561111) #White House phone number
'1614351BX'
A non-brute force attack would need to solve the base-17 discrete log problem (mod 10000000259). This isn't particularly hard but is non-trivial and is probably adequate to dissuade casual attempts to recover the original number. You could replace p by another prime (and a by a corresponding primitive root), as long as p > 10^10 and the hex-representation of p-1 is 9 hex digits or less in length. If the conversion from numbers to identifiers is kept server-side then a casual attacker wouldn't have access to a and p, which would add a layer of "security through obscurity" (dubious security, but better than nothing).

Looking For Data Structure To Add Numbers Consecutively

Like the title says I'm trying to add numbers consecutively. Here's an example:
I'm pretty sure there's a data structure for this but don't remember what it's called. Any help is greatly appreciated. Thanks.
I think this is more of a math question than a data structures question. :-)
The sum of the numbers 1 + 2 + ... + n is equal to n(n + 1) / 2. This number is called the nth triangular number.
Hope this helps!
A data structure for this simple summing problem is overkill. If the numbers are consecutive, then n(n + 1) / 2 is the best formula to use. Even if the consecutive sequence starts from an arbitrary number, like [8 9 10 11 12 13], you can still calculate the sum as ((13 * (13 + 1)) / 2) - ((7 * (7 + 1)) / 2).
Further, if you really need a data structure, you can use a Segment Tree to answer Range Sum Queries. (It is best suited to cases where the data is not consecutive.)
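As a sketch (helper names are my own), the closed-form version of both the 1..n case and the [8 9 10 11 12 13] example:

```python
def triangular(n):
    # nth triangular number: 1 + 2 + ... + n
    return n * (n + 1) // 2

def consecutive_sum(lo, hi):
    # Sum of the consecutive integers lo..hi, via two triangular numbers.
    return triangular(hi) - triangular(lo - 1)

print(triangular(100))         # 5050
print(consecutive_sum(8, 13))  # 63 == 8 + 9 + 10 + 11 + 12 + 13
```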

convert real number to radicals

Suppose I have a real number. I want to approximate it with something of the form a+sqrt(b) for integers a and b. But I don't know the values of a and b. Of course I would prefer to get a good approximation with small values of a and b. Let's leave it undefined for now what is meant by "good" and "small". Any sensible definitions of those terms will do.
Is there a sane way to find them? Something like the continued fraction algorithm for finding fractional approximations of decimals. For more on the fractions problem, see here.
EDIT: To clarify, it is an arbitrary real number. All I have are a bunch of its digits. So depending on how good an approximation we want, a and b might or might not exist. The best I can think of would be to start adding integers to my real, squaring the result, and seeing if I come close to an integer. That is pretty much brute force, and not a particularly good algorithm. But if nothing better exists, that would itself be interesting to know.
EDIT: Obviously b has to be zero or positive. But a could be any integer.
No need for continued fractions; just calculate the square root of every "small" value of b (up to whatever value you feel is still "small" enough), remove everything before the decimal point, and sort/store the fractional parts (along with the b that generated each one).
Then when you need to approximate a real number, find the radical whose decimal portion is closest to the real number's decimal portion. This gives you b; choosing the correct a is then a simple matter of subtraction.
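A small Python sketch of this lookup-table idea (the function name and the default b <= 100 bound are my own choices):

```python
import math

def approx_radical(r, bmax=100):
    # Precompute (fractional part of sqrt(b), b) for all "small" b, sorted.
    table = sorted((math.sqrt(b) % 1.0, b) for b in range(1, bmax + 1))
    frac = r % 1.0
    # Pick the b whose square root has the closest fractional part...
    _, b = min(table, key=lambda entry: abs(entry[0] - frac))
    # ...then a is just a subtraction.
    a = round(r - math.sqrt(b))
    return a, b

print(approx_radical(math.pi))  # (-4, 51), i.e. pi ~ sqrt(51) - 4
```

For pi this recovers the sqrt(51) - 4 approximation discussed in the answers below, accurate to about 0.00016.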
This is actually more of a math problem than a computer problem, but to answer the question I think you are right that you can use continued fractions. What you do is first represent the target number as a continued fraction. For example, if you want to approximate pi (3.14159265) then the CF is:
3; 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, ...
The next step is to create a table of CFs for square roots, then compare the values in the table to the fractional part of the target value (here: 7, 15, 1, 292, 1, 1, 1, 2, ...). For example, let's say your table had square roots for 1-99 only. Then you would find that the closest match is sqrt(51), which has a CF of 7; 7, 14 repeating. The 7, 14 is the closest to pi's 7, 15. Thus your answer would be:
sqrt(51)-4
as the closest approximation for b < 100; it is off by about 0.00016. If you allow larger values of b then you can get a better approximation.
The advantage of using CFs is that it is faster than working in, say, doubles or using floating point. For example, in the above case you only have to compare two integers (7 and 15), and you can also use indexing to make finding the closest entry in the table very fast.
This can be done using mixed integer quadratic programming very efficiently (though there are no run-time guarantees as MIQP is NP-complete.)
Define:
d := the real number you wish to approximate
b, a := two integers such that a + sqrt(b) is as "close" to d as possible
r := (d - a)^2 - b, is the residual of the approximation
The goal is to minimize r. Setup your quadratic program as:
x := [ s b t ]
D := | 1 0 0 |
| 0 0 0 |
| 0 0 0 |
c := [0 -1 0]^T
with the constraint that s - t = f (where f is the fractional part of d)
and b,t are integers (s is not)
This is a convex (therefore optimally solvable) mixed integer quadratic program since D is positive semi-definite.
Once s, b, and t are computed, b is given directly, a = d - s, and t can be ignored.
Your problem may be NP-complete, it would be interesting to prove if so.
Some of the previous answers use methods that are of time or space complexity O(n), where n is the largest “small number” that will be accepted. By contrast, the following method is O(sqrt(n)) in time, and O(1) in space.
Suppose that positive real number r = x + y, where x=floor(r) and 0 ≤ y < 1. We want to approximate r by a number of the form a + √b. If x+y ≈ a+√b then x+y-a ≈ √b, so √b ≈ h+y for some integer offset h, and b ≈ (h+y)^2. To make b an integer, we want to minimize the fractional part of (h+y)^2 over all eligible h. There are at most √n eligible values of h. See following python code and sample output.
import math, random

def findb(y, rhi):
    bestb = loerror = 1
    for r in range(2, rhi):
        v = (r + y)**2
        u = round(v)
        err = abs(v - u)
        if round(math.sqrt(u))**2 == u: continue
        if err < loerror:
            bestb, loerror = u, err
    return bestb

#random.seed(123456) # set a seed if testing repetitively
f = [math.pi - 3] + sorted([random.random() for i in range(24)])
print('     frac   sqrt(b)       error     b')
for frac in f:
    b = findb(frac, 12)
    r = math.sqrt(b)
    t = math.modf(r)[0]  # Get fractional part of sqrt(b)
    print('{:9.5f} {:9.5f} {:11.7f} {:5.0f}'.format(frac, r, t - frac, b))
(Note 1: This code is in demo form; the parameters to findb() are y, the fractional part of r, and rhi, the square root of the largest small number. You may wish to change usage of parameters. Note 2: The
if round(math.sqrt(u))**2 == u: continue
line of code prevents findb() from returning perfect-square values of b, except for the value b=1, because no perfect square can improve upon the accuracy offered by b=1.)
Sample output follows. About a dozen lines have been elided in the middle. The first output line shows that this procedure yields b=51 to represent the fractional part of pi, which is the same value reported in some other answers.
frac sqrt(b) error b
0.14159 7.14143 -0.0001642 51
0.11975 4.12311 0.0033593 17
0.12230 4.12311 0.0008085 17
0.22150 9.21954 -0.0019586 85
0.22681 11.22497 -0.0018377 126
0.25946 2.23607 -0.0233893 5
0.30024 5.29150 -0.0087362 28
0.36772 8.36660 -0.0011170 70
0.42452 8.42615 0.0016309 71
...
0.93086 6.92820 -0.0026609 48
0.94677 8.94427 -0.0024960 80
0.96549 11.95826 -0.0072333 143
0.97693 11.95826 -0.0186723 143
With the following code added at the end of the program, the output shown below also appears. This shows closer approximations for the fractional part of pi.
frac, rhi = math.pi - 3, 16
print('       frac     sqrt(b)         error       b    bMax')
while rhi < 1000:
    b = findb(frac, rhi)
    r = math.sqrt(b)
    t = math.modf(r)[0]  # Get fractional part of sqrt(b)
    print('{:11.7f} {:11.7f} {:13.9f} {:7.0f} {:7.0f}'.format(frac, r, t - frac, b, rhi**2))
    rhi = 3 * rhi // 2  # integer division, so findb's range() still receives an int
frac sqrt(b) error b bMax
0.1415927 7.1414284 -0.000164225 51 256
0.1415927 7.1414284 -0.000164225 51 576
0.1415927 7.1414284 -0.000164225 51 1296
0.1415927 7.1414284 -0.000164225 51 2916
0.1415927 7.1414284 -0.000164225 51 6561
0.1415927 120.1415831 -0.000009511 14434 14641
0.1415927 120.1415831 -0.000009511 14434 32761
0.1415927 233.1415879 -0.000004772 54355 73441
0.1415927 346.1415895 -0.000003127 119814 164836
0.1415927 572.1415909 -0.000001786 327346 370881
0.1415927 911.1415916 -0.000001023 830179 833569
I do not know if there is any kind of standard algorithm for this kind of problem, but it does intrigue me, so here is my attempt at developing an algorithm that finds the needed approximation.
Call the real number in question r. First, I assume that a can be negative; in that case we can reduce the problem, and now only have to find a b such that the decimal part of sqrt(b) is a good approximation of the decimal part of r. Let us now write r as r = x.y, with x being the integer part and y the decimal part.
Now:
b = r^2
= (x.y)^2
= (x + .y)^2
= x^2 + 2 * x * .y + .y^2
= 2 * x * .y + .y^2 (mod 1)
We now only have to find an x such that 0 = .y^2 + 2 * x * .y (mod 1) (approximately).
Filling that x into the formulas above we get b, and can then calculate a as a = r - sqrt(b). (All of these calculations have to be carefully rounded, of course.)
Now, for the time being I am not sure if there is a way to find this x without brute-forcing it. But even then, one can simply use a loop to find an x that is good enough.
I am thinking of something like this (semi-pseudocode):
max_diff_low = 0.01  // arbitrary accuracy
max_diff_high = 1 - max_diff_low
y = r % 1
v = y^2
addend = 2 * y
x = 0
while (v < max_diff_high && v > max_diff_low)
    x++
    v = (v + addend) % 1
c = (x + y)^2
b = round(c)
a = round(r - (x + y))
Now, I think this algorithm is fairly efficient, while even allowing you to specify the desired accuracy of the approximation. One thing that could turn it into an O(1) algorithm is calculating all the x values up front and putting them into a lookup table. If one only cares about the first three decimal digits of r (for example), the lookup table would only have 1000 values, which is only 4 kB of memory (assuming 32-bit integers are used).
Hope this is helpful at all. If anyone finds anything wrong with the algorithm, please let me know in a comment and I will fix it.
EDIT:
Upon reflection I retract my claim of efficiency. There is in fact as far as I can tell no guarantee that the algorithm as outlined above will ever terminate, and even if it does, it might take a long time to find a very large x that solves the equation adequately.
One could maybe keep track of the best x found so far and relax the accuracy bounds over time to make sure the algorithm terminates quickly, at the possible cost of accuracy.
These problems are of course non-existent, if one simply pre-calculates a lookup table.

Calculating powers (e.g. 2^11) quickly [duplicate]

This question already has answers here:
Closed 13 years ago.
Possible Duplicate:
The most efficient way to implement an integer based power function pow(int, int)
How can I calculate powers with better runtime?
E.g. 2^13.
I remember seeing somewhere that it has something to do with the following calculation:
2^13 = 2^8 * 2^4 * 2^1
But I can't see how calculating each component of the right side of the equation and then multiplying them would help me.
Any ideas?
Edit: I did mean with any base. How do the algorithms you've mentioned below, in particular "exponentiation by squaring", improve the runtime/complexity?
There is a generalized algorithm for this, but in languages that have bit-shifting, there's a much faster way to compute powers of 2. You just put in 1 << exp (assuming your bit shift operator is << as it is in most languages that support the operation).
I assume you're looking for the generalized algorithm and just chose an unfortunate base as an example. I will give this algorithm in Python.
def intpow(base, exp):
    if exp == 0:
        return 1
    elif exp == 1:
        return base
    elif (exp & 1) != 0:
        return base * intpow(base * base, exp // 2)
    else:
        return intpow(base * base, exp // 2)
This computes the power in O(log2 exp) multiplications. It's a divide-and-conquer algorithm. :-) As someone else said: exponentiation by squaring.
If you plug your example into this, you can see how it works and is related to the equation you give:
intpow(2, 13)
2 * intpow(4, 6)
2 * intpow(16, 3)
2 * 16 * intpow(256, 1)
2 * 16 * 256 == 2^1 * 2^4 * 2^8
Use bitwise shifting. Ex. 1 << 11 returns 2^11.
Powers of two are the easy ones. In binary 2^13 is a one followed by 13 zeros.
You'd use bit shifting, which is a built in operator in many languages.
You can use exponentiation by squaring. This is also known as "square-and-multiply" and works for bases != 2, too.
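For reference, an iterative sketch of square-and-multiply (same idea as the recursive intpow elsewhere in this thread, processing the exponent's bits from least to most significant):

```python
def ipow(base, exp):
    # Exponentiation by squaring: O(log2 exp) multiplications.
    result = 1
    while exp > 0:
        if exp & 1:       # low bit set: multiply the current square in
            result *= base
        base *= base      # square for the next bit
        exp >>= 1
    return result

print(ipow(2, 13))  # 8192
print(ipow(3, 5))   # 243
```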
If you're not limiting yourself to powers of two, then:
k^(2n) = (k^n)^2
The fastest free algorithm I know of is by Phillip S. Pang, Ph.D., and the source code can be found here.
It uses table-driven decomposition, by which it is possible to make an exp() function that is 2-10 times faster than the native exp() of a Pentium(R) processor.
