Encode a string to a specified length using any algorithm

Is there a way to compress/encode a string to a specified length (8-10 characters)?
I have a combination of a secret key and a 16-digit numeric value, and I want to create a unique id from the combination of both. The id should be between 8 and 12 characters long, and it should not change if the combination is the same.
Please suggest a way.

If it's 16 decimal digits and your string can contain any characters, then sure. If you want ten characters out, then you'd need 40 different characters: 40^10 > 10^16. Or for nine characters out, you need 60 different characters: 60^9 > 10^16. E.g. some subset of the upper case letters, lower case letters, and digits (62 to choose 40 or 60 from). Then it is simply a matter of base conversion either way: convert from base 10 to base 40 or 60, and then back.
Many languages already have Base-64 coding routines, which will get you to nine characters.
Eight is a problem, since you would need 100 characters (100^8 == 10^16), and there are only 95 printable ASCII characters.
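A minimal sketch of the base-conversion idea above, assuming Python and a 62-character alphabet (the alphabet and function names are illustrative, not part of the original answer):

```python
import string

# Digits plus upper- and lower-case letters: 62 symbols; 62^9 > 10^16,
# so any 16-digit decimal value fits in at most 9 characters.
ALPHABET = string.digits + string.ascii_uppercase + string.ascii_lowercase
BASE = len(ALPHABET)

def encode(n: int) -> str:
    """Convert a non-negative integer to a string over ALPHABET."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, r = divmod(n, BASE)
        out.append(ALPHABET[r])
    return "".join(reversed(out))

def decode(s: str) -> int:
    """Inverse of encode()."""
    n = 0
    for ch in s:
        n = n * BASE + ALPHABET.index(ch)
    return n

n = 1234567890123456            # a 16-digit value
s = encode(n)
print(s, len(s))                # at most 9 characters
assert decode(s) == n
```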

You could use a secure hash function, like SHA-512, and truncate the resulting hex string to the desired length.
If you want slightly more entropy per character, you can Base64-encode the digest before truncating.
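A hedged sketch of the hash-and-truncate approach, assuming Python's hashlib and base64; the function and argument names are made up for illustration:

```python
import base64
import hashlib

def short_id(secret_key: str, numeric_value: str, length: int = 10) -> str:
    """Deterministic short id: the same inputs always produce the same output."""
    digest = hashlib.sha512((secret_key + numeric_value).encode("utf-8")).digest()
    # Hex carries 4 bits per character; URL-safe Base64 carries 6 bits per character.
    return base64.urlsafe_b64encode(digest).decode("ascii")[:length]

print(short_id("my-secret", "1234567890123456"))   # stable 10-character id
```

Truncating a hash raises the collision risk, so keep the output as long as the 8-12 character budget allows.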

Related

How many numbers can we store with 1 bit?

I want to know how many characters or numbers I can store in just 1 bit. It would be more helpful if you could explain it in octal and hexadecimal.
I want to know how many characters or numbers I can store in just 1 bit.
It is not practical to use a single bit to store numbers or characters. However, you could say:
One integer provided that the integer is in the range 0 to 1.
One ASCII character provided that the character is either NUL (0x00) or SOH (0x01).
The bottom line is that a single bit has two states: 0 and 1. Any value domain with more than two values cannot be represented using a single bit.
It would be more helpful if you could explain it in octal and hexadecimal.
That is not relevant to the problem. Octal and hexadecimal are different textual representations for numeric data. They make no difference to the meaning of the numbers, or (in most cases¹) the way that you represent the numbers in a computer.
¹ The exception is when you are representing numbers as text; e.g. when you represent the number 42 in a text document as the character '4' followed by the character '2'.
A bit is a "binary digit", or a value from a set of size two. If you have one or more bits, you raise 2 to the power of the number of bits. So, 2^1 gives 2. The field in Mathematics is called combinatorics.

Derive password string from random bytes

I have 32 bytes. I need to derive from them a password string (which will hopefully work on most websites), given certain restrictions.
All characters must be in one of { A-Z, a-z, 0-9, !##$% }.
The string will have at least two characters from each of the above sets.
The string must be exactly 15 characters long.
Currently I'm using the bytes to seed a non-cryptographically-secure PRNG, which I'm then using to:
get two random characters from each of the sets and push them.
fill the rest of the string with randomly chosen characters from any of the sets.
shuffle the string.
Is this valid, and is there a simpler way?
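A rough sketch of the procedure described above, assuming Python; the character sets are copied from the question, and random.Random seeded with the 32 bytes mirrors the question's non-cryptographic PRNG rather than recommending one:

```python
import random
import string

# Character sets as listed in the question (the special-character set is taken verbatim).
SETS = [string.ascii_uppercase, string.ascii_lowercase, string.digits, "!##$%"]

def derive_password(seed_bytes: bytes, length: int = 15) -> str:
    rng = random.Random(seed_bytes)        # deterministic, NOT cryptographically secure
    chars = [rng.choice(s) for s in SETS for _ in range(2)]          # two from each set
    pool = "".join(SETS)
    chars += [rng.choice(pool) for _ in range(length - len(chars))]  # fill to 15
    rng.shuffle(chars)                     # so the guaranteed characters aren't in fixed positions
    return "".join(chars)

print(derive_password(b"\x00" * 32))       # same 32 bytes -> same password
```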

How to encode a number as a string such that the lexicographic order of the generated string is in the same order as the numeric order

For example, if we have the two strings 2 and 10, 10 will come first if we order them lexicographically.
A very trivial solution would be to repeat a character n times.
eg. 2 can be encoded as aa
10 as aaaaaaaaaa
This way the lexicographic order is the same as the numeric one.
But, is there a more elegant way to do this?
When converting the numbers to strings make sure that all the strings have the same length, by appending 0s in the front if necessary. So 2 and 10 would be encoded as "02" and "10".
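A one-line sketch of the zero-padding approach, assuming Python and a known maximum width (the width of 2 here is just for the 2-vs-10 example):

```python
width = 2                                 # must cover the largest number you expect
encoded = sorted(str(n).zfill(width) for n in (10, 2))
print(encoded)                            # ['02', '10'] -- lexicographic order now matches numeric order
```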
While kjampani's solution is probably the best and easiest in normal applications, another way which is more space-efficient is to prepend every string with its own length. Of course, you need to encode the length in a way which is also consistently sorted.
If you know all the strings are fairly short, you can just encode their length as a fixed-length base-X sequence, where X is the number of character codes you're willing to use (popular values are 64, 96, 255 and 256.) Note that you have to use the character codes in lexicographical order, so normal base64 won't work.
One variable-length order-preserving encoding is the one used by UTF-8. (Not UTF-8 directly, which has a couple of corner cases which will get in the way, but the same encoding technique. The order-preserving property of UTF-8 is occasionally really useful.) The full range of such compressed codes can encode values up to 42 bits long, with an average of five payload bits per byte. That's sufficient for pretty long strings; four terabyte long strings are pretty rare in the wild; but if you need longer, it's possible, too, by extending the size prefix over more than one byte.
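A minimal sketch of the length-prefix idea, assuming Python and numbers of at most 9 digits so that the length fits in a single leading character (longer values need the multi-byte, UTF-8-style prefix described above):

```python
def encode(n: int) -> str:
    s = str(n)
    assert len(s) <= 9, "a one-character length prefix only covers up to 9 digits"
    return str(len(s)) + s                # shorter (hence smaller) numbers sort first

print(sorted(encode(v) for v in (1000, 2, 10, 999)))
# ['12', '210', '3999', '41000'] -- same order as 2, 10, 999, 1000
```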
Break the string into successive substrings of letters and numbers, and then sort by comparing each substring, treating it as an integer if it is a numeric string:
"aaa2" ---> aaa + 2
"aaa1000" ---> aaa + 1000
aaa == aaa
Since they're equal, we continue:
1000 > 2
Hence, aaa1000 > aaa2.
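A sketch of that split-and-compare ("natural sort") idea, assuming Python and using re to separate the digit runs:

```python
import re

def natural_key(s: str):
    # Split into alternating text and digit runs; digit runs compare numerically.
    return [int(part) if part.isdigit() else part for part in re.split(r"(\d+)", s)]

print(sorted(["aaa1000", "aaa2"], key=natural_key))   # ['aaa2', 'aaa1000']
```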

Need a maths algorithm to encode a big-integer number as a smaller integer

I want to convert a 100-digit number into fewer than 10 digits and vice versa.
So I can pass the encoded number to a mobile user and, on getting it back, reconstruct the 100-digit number again.
I want to use it in PHP, .NET or JS.
But before that I need an algorithm for it.
I have some ideas involving simple divide/subtract and add/multiply steps, but I need something more secure than that.
What you're asking for is impossible. You are trying to pigeonhole 10^100 items into 10^10 boxes. Some box will get more than one item and so it's impossible to invert back to "the" original item.
You could encode the 100-digit base-10 numbers as a 56-digit base-62 number (use uppercase and lowercase Roman alphabet and digits 0-9). The math here is 100 * log(10) / log(62).
To encode using less than ten characters from some alphabet, you need an alphabet with ~2^34 symbols. The math here is 100 * log(10) / log(number of symbols). Good luck with that.
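A small sketch verifying the arithmetic above, assuming Python; digits_needed is a made-up helper, not something from the original answers:

```python
import math

def digits_needed(decimal_digits: int, alphabet_size: int) -> int:
    """How many symbols from the given alphabet a decimal number of that length needs."""
    return math.ceil(decimal_digits * math.log(10) / math.log(alphabet_size))

print(digits_needed(100, 62))       # 56 -- base-62 (upper/lower/digits) gets you to 56 characters
print(digits_needed(100, 2**34))    # 10 -- under ten characters needs a ~2^34-symbol alphabet
```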
If you have more than 10 000 000 000 different possible values in the 100-digit number, you cannot possibly map that to a 10-digit number and reliably map back to the original number.
A 100-digit number: I assume this means a base-ten number. When talking about numbers on computers, talk of 'digits' is almost meaningless.
If you actually mean a 100-bit integer, then it won't fit into a single 64-bit integer (range +/- 9,223,372,036,854,775,808), and you have not phrased your question all that well. In any case, no amount of compression or encoding will let you represent 100 bits using no more than 10 bits.
If you mean 100 figures in base ten, then you are dealing with bignums, so you should probably just treat them as bytes and use a bignum library.
100 base-ten figures still fit in fewer than 512 bits.
Assuming that the 100-digit number is base 10, then if my math is not wrong you'll need 10 base 100 digits to represent the same number. So instead of using just characters from 0-9, you'll need to expand the characters to include other glyphs, including upper-case and lower-case letters, etc., to complete a 100 character alphabet. OK, my math is wrong, so disregard this, but consider the next paragraph.
Another thought is to use a hashing algorithm to derive a 10-byte hash from your 100-digit number and use that as a key in a server-side database (hash table). No encoding/decoding: just send the key to the mobile client, and the mobile client uses the key to fetch the 100-digit number from the server.
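A hedged sketch of that server-side lookup idea, assuming Python; the in-memory dict stands in for whatever database the server actually uses, and the names are illustrative:

```python
import hashlib

store = {}   # stands in for a server-side table: short key -> full number

def make_key(big_number: str) -> str:
    key = hashlib.sha256(big_number.encode()).hexdigest()[:20]   # 10 bytes = 20 hex characters
    store[key] = big_number      # the server keeps the mapping; only the key goes to the phone
    return key

key = make_key("9" * 100)
print(key)                       # short token to send to the mobile client
print(store[key] == "9" * 100)   # the client sends the key back; the server returns the full number
```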

How to represent an n-byte array in less than 2*n characters

Given that an n-byte array can be represented as a 2*n-character string using hex, is there a way to represent the n-byte array in fewer than 2*n characters?
For example, an integer (int32) can typically be considered as a 4-byte array of data.
The advantage of hex is that splitting an 8-bit byte into two equal halves is about the simplest thing you can do to map a byte to printable ASCII characters. More efficient methods consider multiple bytes as a block:
Base-64 uses 64 ASCII characters to represent 6 bits at a time. Every 3 bytes (i.e. 24 bits) are split into 4 6-bit base-64 digits, where the "digits" are:
ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/
(and if the input is not a multiple of 3 bytes long, a 65th character, "=", is used for padding at the end). Note that some variant forms of base-64 use different characters for the last two "digits".
Ascii85 is another representation, which is somewhat less well-known, but commonly used: it's often the way that binary data is encoded within PostScript and PDF files. This considers every 4 bytes (big-endian) as an unsigned integer, which is represented as a 5-digit number in base 85, with each base-85 digit encoded as ASCII code 33+n (i.e. "!" for 0, up to "u" for 84) - plus a special case where the single character "z" may be used (instead of "!!!!!") to represent 4 zero bytes.
(Why 85? Because 84^5 < 2^32 < 85^5.)
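A small sketch comparing these options, assuming Python's standard base64 module (which provides both Base64 and Ascii85):

```python
import base64

data = bytes(range(16))                    # a 16-byte array

print(data.hex(), len(data.hex()))         # 32 characters: the 2*n hex baseline
b64 = base64.b64encode(data).decode()
print(b64, len(b64))                       # 24 characters: 4 per 3 bytes, '=' padded
a85 = base64.a85encode(data).decode()
print(a85, len(a85))                       # 20 characters: 5 per 4 bytes
```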
Yes: using binary (in which case it takes n bytes, not surprisingly), or using any base higher than 16; a common one is base 64.
It might depend on the exact numbers you want to represent. For instance, the number 9223372036854775808, which requires 8 bytes to represent in binary, takes only 4 bytes in ASCII if you use the product-of-primes representation (which is "2^63").
How about base-64?
It all depends on what characters you're willing to use in your encoding (i.e. representation).
Base64 fits 6 bits in each character, which means that 3 bytes will fit in 4 characters.
Using 65,536 of the roughly 90,000 defined Unicode characters, you may represent a binary string in N/2 characters.
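A rough sketch of this two-bytes-per-character idea, assuming Python; it maps each 16-bit pair into supplementary-plane code points (mostly unassigned, unlike the defined characters the answer refers to), so it only illustrates the N/2 ratio:

```python
def pack(data: bytes) -> str:
    if len(data) % 2:
        data += b"\x00"          # a real scheme would also need to record the original length
    # 0x10000 + value is always a valid, non-surrogate code point (0x10000..0x1FFFF)
    return "".join(chr(0x10000 + int.from_bytes(data[i:i + 2], "big"))
                   for i in range(0, len(data), 2))

def unpack(text: str) -> bytes:
    return b"".join((ord(ch) - 0x10000).to_bytes(2, "big") for ch in text)

data = bytes(range(8))
s = pack(data)
print(len(s))                    # 4 characters for 8 bytes
assert unpack(s) == data
```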
Yes. Use more characters than just 0-9 and a-f. A single character (assuming 8-bit) can have 256 values, so you can represent an n-byte number in n characters.
If it needs to be printable, you can just choose some set of characters to represent various values. A good option is base-64 in that case.

Resources