How to convert a 32-bit integer to network byte order? - ruby

How do I convert a 32-bit integer to network byte order?
Which of these is the right way to do it?
[1024].pack("N")
OR
[1,0,2,4].pack("N")
Thanks

To start, look at the output of each:
>> [1024].pack("N")
=> "\000\000\004\000"
>> [1,0,2,4].pack("N")
=> "\000\000\000\001"
Note what the second is missing: a single "N" directive consumes only the first element of the array, so to pack all four elements you need four directives:
>> [1,0,2,4].pack("NNNN")
=> "\000\000\000\001\000\000\000\000\000\000\000\002\000\000\000\004"

Related

Why are my byte arrays not different even though print() says they are?

I am new to Python, so please forgive me if I'm asking a dumb question. In my function I generate a random byte array of a given length called "input_data", then I add some bit errors byte by byte and store the result in another byte array called "output_data". The print function shows that it works exactly as expected: the bytes differ. But if I compare the byte arrays afterwards, they seem to be identical!
import random
import binascii

def simulate_ber(packet_length, ber, verbose=False):
    # generate input data
    input_data = bytearray(random.getrandbits(8) for _ in xrange(packet_length))
    if(verbose):
        print(binascii.hexlify(input_data)+" <-- simulated input vector")
    output_data = input_data
    # add bit errors
    num_errors = 0
    for byte in range(len(input_data)):
        error_mask = 0
        for bit in range(0, 7, 1):
            if(random.uniform(0, 1)*100 < ber):
                error_mask |= 1 << bit
                num_errors += 1
        output_data[byte] = input_data[byte] ^ error_mask
    if(verbose):
        print(binascii.hexlify(output_data)+" <-- output vector")
        print("number of simulated bit errors: " + str(num_errors))
    if(input_data == output_data):
        print("data identical")
number of packets: 1
bytes per packet: 16
simulated bit error rate: 5
start simulation...
0d3e896d61d50645e4e3fa648346091a <-- simulated input vector
0d3e896f61d51647e4e3fe648346001a <-- output vector
number of simulated bit errors: 6
data identical
Where is the bug? I am sure the problem is somewhere between my ears...
Thank you in advance for your help!
output_data = input_data
In Python, assignment does not copy; after the line above, both variables refer to the same object in memory. e.g:
>>> y=['Hello']
>>> x=y
>>> x.append('World!')
>>> x
['Hello', 'World!']
>>> y
['Hello', 'World!']
Cast output_data as a new bytearray and you should be good:
output_data = bytearray(input_data)

Uuencode vs Base64 encode: why do pack('m') and pack('u') in Ruby return strings of different lengths?

According to the specs they should be the same length, and a string of length 36 should translate to a string of length 48, for example:
bin = "123456789012345678901234567890123456"
[49] pry(main)> [bin].pack("m").length
=> 49
[50] pry(main)> [bin].pack("u").length
=> 50
[54] pry(main)> [bin].pack("m")
=> "MTIzNDU2Nzg5MDEyMzQ1Njc4OTAxMjM0NTY3ODkwMTIzNDU2\n"
[55] pry(main)> [bin].pack("u")
=> "D,3(S-#4V-S#Y,\#$R,S0U-C<X.3`Q,C,T-38W.#DP,3(S-#4V\n"
Compensating for the "funny newline" we get the proper length in the base64 encoding (the pack('m') variant), but I don't know how to get the line length right in the uuencoding (the pack('u') variant).
I really need that uuencoded string to be 48 chars long :) what's the issue here?
Update
I did my own uuencode implementation: I created a method that generates a bitmap, then split the bitmap into 6-bit groups and mapped each group to a character, as the provider of the specification helpfully explained in the spec:
def to_bitmap bytes
  bytes.scan(/./).map{|b| b.ord.to_s(2).rjust(8, "0")}.join
end
[5] pry(main)> to_bitmap(str).scan(/.{6}/).map{|b| (from_bitmap("00"+b).ord+0x20).chr }.join
=> ",3(S-#4V-S#Y,\#$R,S0U-C<X.3 Q,C,T-38W.#DP,3(S-#4V"
[6] pry(main)> to_bitmap(str).scan(/.{6}/).map{|b| (from_bitmap("00"+b).ord+0x20).chr }.join.length
=> 48
and I assume this is the right thing; it's kind of like uuencode, but it differs in a couple of places:
,3(S-#4V-S#Y,\#$R,S0U-C<X.3 Q,C,T-38W.#DP,3(S-#4V
D,3(S-#4V-S#Y,\#$R,S0U-C<X.3`Q,C,T-38W.#DP,3(S-#4V\n
Weird. I guess the specification I'm implementing uses a "uuencode" that is not quite uuencode, though they claim that generic software libraries support this format. Am I missing something, or does this seem like bullshit and a workaround for somebody's half-assed implementation of uuencode?
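For what it's worth, the two visible differences are standard uuencode framing: pack('u') prepends a length character ("D" is 36 + 0x20, the byte count of the line) and appends "\n", and it encodes a zero 6-bit group as a backtick (0x60) where your implementation emits a space (0x20). A rough sketch of recovering the bare 48-character payload, assuming the input fits on a single uuencode line (45 bytes or less):
payload = [bin].pack("u")[1..-2]   # drop the leading length character and the trailing "\n"
payload.length                     # => 48
payload.tr("`", " ")               # map backtick back to space if the spec expects 0x20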

How to convert a hexadecimal string back to its 'SecureRandom.random_bytes' binary?

In Ruby, I can create a 16 byte binary and convert it to a hexadecimal string:
key = SecureRandom.random_bytes(16) # => "hN\xDB\xAD\xAF\xB3R\xC0`\xB19\x1D\x19.\xD3I"
hex_key = key.each_byte.map { |byte| '%02x' % byte }.join # => "684edbadafb352c060b1391d192ed349"
In PHP and Javascript I can convert the hexadecimal string back to its 16-byte binary.
PHP:
<?php
hex2bin("684edbadafb352c060b1391d192ed349");
?>
Javascript via CryptoJS:
CryptoJS.enc.Hex.parse("684edbadafb352c060b1391d192ed349");
But how do I convert the hexadecimal string back to its 16-byte binary, using Ruby?
Is this what you are looking for?
[str].pack('H*').bytes.to_a
or just
[str].pack('H*')
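A quick round-trip check using the key/hex_key names from the question (String#unpack1 needs Ruby 2.4+; on older versions use unpack('H*').first):
key = SecureRandom.random_bytes(16)
hex_key = key.unpack1('H*')          # same value as the each_byte/map version above
[hex_key].pack('H*') == key          # => true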

How to convert negative integers to binary in Ruby

Question 1: I cannot find a way to convert negative integers to binary the way I need. I am supposed to convert them like this:
-3 => "11111111111111111111111111111101"
I tried below:
sprintf('%b', -3) => "..101" # ".." appears instead of the leading 1 bits.
-3.to_s(2) => "-11" # This just adds - to the binary of the positive integer 3.
Question 2: Interestingly, if I use online converter, it tells me that binary of -3 is "00101101 00110011".
What is the difference between "11111111111111111111111111111101" and "00101101 00110011"?
Packing then unpacking will convert -3 to 4294967293 (2**32 - 3):
[-3].pack('L').unpack('L')
=> [4294967293]
sprintf('%b', [-3].pack('L').unpack('L')[0])
# => "11111111111111111111111111111101"
sprintf('%b', [3].pack('L').unpack('L')[0])
# => "11"
Try:
> 31.downto(0).map { |n| -3[n] }.join
#=> "11111111111111111111111111111101"
Note: This applies to negative numbers only.
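Another option, not from the answers above but a common idiom: mask with 0xFFFFFFFF to get the unsigned 32-bit (two's-complement) value, then convert to binary:
> (-3 & 0xFFFFFFFF).to_s(2)
#=> "11111111111111111111111111111101"
Regarding Question 2: 00101101 00110011 are the ASCII codes of the characters "-" and "3" (0x2D and 0x33), so the online converter encoded the text "-3" rather than the number's two's-complement representation.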

Why is the length of the encrypted numerical value different?

I encrypted numerical values as follows.
> secret = SecureRandom::hex(128)
> encryptor = ::ActiveSupport::MessageEncryptor.new(secret, cipher: 'aes-256-cbc')
> message1 = 1
> message1.size
=> 8
> message1.class
=> Fixnum
> encrypt_message1 = encryptor.encrypt_and_sign(message1)
> encrypt_message1.length
=> 110
> message2 = 10000
> message2.size
=> 8
> message2.class
=> Fixnum
> encrypt_message2 = encryptor.encrypt_and_sign(message2)
> encrypt_message2.length
=> 110
This result is as expected:
any number less than 4611686018427387903 is a Fixnum, and a Fixnum is 8 bytes.
The block size of AES is 128 bits (16 bytes), and 8 bytes < 16 bytes,
so the encrypted values of 1 and 10000 have the same length.
But in the following case, the length of the encrypted value is different.
> message3 = 1000000000000000000000000000
> message3.size
=> 12
> message3.class
=> Bignum
> encrypt_message3 = encryptor.encrypt_and_sign(message3)
> encrypt_message3.size
=> 138
1000000000000000000000000000 is a Bignum, but its size is 12, which is still less than 16 (the AES block size).
So I expected the length of its encrypted value to be the same as a Fixnum's.
But they are different...
Why are they different?
There are multiple layers to what is happening here; you cannot explain it from the data size and cipher alone (you also have to factor in the transformations that are applied along the way).
Look at: https://github.com/rails/rails/blob/29be3f5d8386fc9a8a67844fa9b7d6860574e715/activesupport/lib/active_support/message_encryptor.rb
and after that look at:
https://github.com/rails/rails/blob/29be3f5d8386fc9a8a67844fa9b7d6860574e715/activesupport/lib/active_support/message_verifier.rb which is used in the encryptor.
There are a few stages:
serializing the data you pass in (this is done using Marshal.dump if you don't specify any serializer)
base64 encoding the data.
generating a digest (ie signature) for the data.
encrypting the data+digest and storing the result + IV from the cipher used in encrypted form.
If you want to understand the generated encrypted data you basically need to trace through the code above, but:
::Base64.strict_encode64(Marshal.dump(1)).size is 8
::Base64.strict_encode64(Marshal.dump(10000)).size is 8
::Base64.strict_encode64(Marshal.dump(1000000000000000000000000000)).size is 24
But:
Marshal.dump(1).size is 4
Marshal.dump(10000).size is 6
Marshal.dump(1000000000000000000000000000).size is 17
Here is how Marshal.dump works internally: http://jakegoulding.com/blog/2013/01/15/a-little-dip-into-rubys-marshal-format/
Here is how base64 encoding works: https://blogs.oracle.com/rammenon/entry/base64_explained Look at the rules for padding.
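A minimal sketch to see the size progression for yourself (assuming the activesupport gem is available; details such as the required key length and default serializer vary by version, so a 32-byte key is used here instead of SecureRandom::hex(128)):
require 'active_support'
require 'active_support/message_encryptor'
require 'securerandom'
require 'base64'

secret = SecureRandom.random_bytes(32)   # aes-256-cbc expects a 32-byte key
encryptor = ActiveSupport::MessageEncryptor.new(secret, cipher: 'aes-256-cbc')

[1, 10000, 1000000000000000000000000000].each do |n|
  marshalled = Marshal.dump(n)
  encoded    = Base64.strict_encode64(marshalled)
  encrypted  = encryptor.encrypt_and_sign(n)
  puts "#{n}: marshal=#{marshalled.bytesize}  base64=#{encoded.bytesize}  encrypted=#{encrypted.length}"
end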

Resources