As a programming assignment in a Cryptography course I have the following problem:
Read a video file, divide it into 1KB blocks, take the last block, compute its SHA256 sum, append that sum to the second-to-last block, compute the SHA256 sum of the resulting block, and so on... The answer to the problem is the last SHA256 sum you get from this chain. For a certain test video, this algorithm should yield the SHA256 sum '5b96aece304a1422224f9a41b228416028f9ba26b0d1058f400200f06a589949'.
I understand the problem, but I cannot solve it using Ruby.
This is my Ruby code:
require 'digest/sha2'
def chunker
video, array = File.new('video.mp4', 'r'), []
(0..video.size/1024).each { |i| array[i] = video.read 1024 }
array
end
video_chunks, sha, digest = chunker, '', Digest::SHA2.new
video_chunks.reverse_each { |chunk| sha = (digest << chunk+sha).to_s }
puts sha
I'm basically dividing the video into 1024-byte chunks, then traversing them in reverse, computing the SHA256 sum of (currentBlock + lastSha) and saving it to a variable, which I print at the end of the reverse traversal.
This does not work.
The SHA256 sum of the first chunk processed (the last block of the file, which has no previous sha appended to it) is 'f2e208617302c6b089f52b6f27f78a7171b4424c1191989bbf86ed5ab0cbccee'; I know this from a Java program that solves the exact same problem, so that sum is correct. But the second SHA256 sum, the hash of the second-to-last block with 'f2e2...' appended, should be '34b6...', and my code outputs something else. The problem occurs in the expression "digest << chunk+sha": somehow, when appending, something happens and the resulting sha is incorrect.
Any ideas? :(
The sha should not be generated via .to_s; you need the binary string version. In addition, you are feeding more and more blocks into the same digest object, which keeps accumulating everything pushed into it, whereas your exercise is specifically about implementing the chaining yourself (i.e. in your own code).
So instead of maintaining a digest object and calling .to_s on it to fetch each intermediate hash, you should calculate each hash fresh using the Digest::SHA2.digest( data ) class method.
Try this instead:
video_chunks, sha = chunker, ''
video_chunks.reverse_each { |chunk| sha = Digest::SHA2.digest( chunk+sha ) }
# Convert to hex:
puts sha.unpack('H*').first
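Putting it together, here's a minimal end-to-end sketch of the whole chain (my own variation, not a verbatim fix of the code above: it reads the file in binary mode and keeps any short final block as-is):
require 'digest/sha2'
# Read the file in binary mode, 1024 bytes at a time;
# the final chunk may be shorter than 1024 bytes and is used as-is.
chunks = []
File.open('video.mp4', 'rb') do |f|
  chunks << f.read(1024) until f.eof?
end
sha = ''
chunks.reverse_each { |chunk| sha = Digest::SHA2.digest(chunk + sha) }
puts sha.unpack('H*').first   # hex digest of the last hash in the chain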
Summary
Looking at other questions along these lines hasn't helped, because I'm already reading the file line by line, so I'm not running out of memory on the large file. In fact my memory usage is pretty low, but it is taking a really long time to create the smaller file that I can then search and merge the other CSV into.
Question
It has been 5 days and I'm not sure how much is left to go, but it still hasn't exited the foreach loop over the main file, which has 17.8 million records. Is there a faster way to handle this processing in Ruby? Anything I can do on macOS to optimize it? Any advice would be great.
# # -------------------------------------------------------------------------------------
# # USED TO GET ID NUMBERS OF THE SPECIFIC ITEMS THAT ARE NEEDED
# # -------------------------------------------------------------------------------------
etas_title_file = './HathiTrust ETAS Titles.csv'
oclc_id_array = []
angies_csv = []
CSV.foreach(etas_title_file ,'r', {:headers => true, :header_converters => :symbol}) do |row|
oclc_id_array << row[:oclc]
angies_csv << row.to_h
end
oclc_id_array.uniq!
# -------------------------------------------------------------------------------------
# RUN ONCE IF DATABASE IS NOT POPULATED
# -------------------------------------------------------------------------------------
headers = %i[htid access rights ht_bib_key description source source_bib_num oclc_num isbn issn lccn title imprint rights_reason_code rights_timestamp us_gov_doc_flag rights_date_used pub_place lang bib_fmt collection_code content_provider_code responsible_entity_code digitization_agent_code access_profile_code author]
remove_keys = %i[access rights description source source_bib_num isbn issn lccn title imprint rights_reason_code rights_timestamp us_gov_doc_flag rights_date_used pub_place lang bib_fmt collection_code content_provider_code responsible_entity_code digitization_agent_code access_profile_code author]
new_hathi_csv = []
processed_keys = []
CSV.foreach('./hathi_full_20200401.txt' ,'r', {:headers => headers, :col_sep => "\t", quote_char: "\0" }) do |row|
next unless oclc_id_array.include? row[:oclc_num]
next if processed_keys.include? row[:oclc_num]
puts "#{row[:oclc_num]} included? #{oclc_id_array.include? row[:oclc_num]}"
new_hathi_csv << row.to_h.except(*remove_keys)
processed_keys << row[:oclc_num]
end
As far as I was able to determine, OCLC IDs are alphanumeric. This means we want to use a Hash to store these IDs. A Hash has a general lookup complexity of O(1), while your unsorted Array has a lookup complexity of O(n).
If you use an Array, your worst-case lookup is 18 million comparisons (to find a single element, Ruby may have to go through all 18 million IDs), while with a Hash it is effectively one lookup. To put it simply: using a Hash will be millions of times faster than your current implementation.
The pseudocode below will give you an idea how to proceed. We will use a Set, which is like a Hash, but handy when all you need to do is check for inclusion:
require 'set'
oclc_ids = Set.new
CSV.foreach(...) { |row|
  oclc_ids.add(row[:oclc]) # Add ID to Set
  ...
}
# No need to call uniq on a Set.
# The elements in a Set are always unique.
processed_keys = Set.new
CSV.foreach(...) { |row|
  next unless oclc_ids.include?(row[:oclc_num])   # Extremely fast lookup
  next if processed_keys.include?(row[:oclc_num]) # Extremely fast lookup
  ...
  processed_keys.add(row[:oclc_num])
}
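Applied to the code in your question, a concrete (untested) sketch could look like this: the same file names, headers and remove_keys arrays as above, just with the Arrays swapped for Sets:
require 'csv'
require 'set'
etas_title_file = './HathiTrust ETAS Titles.csv'
oclc_id_set = Set.new
angies_csv  = []
CSV.foreach(etas_title_file, headers: true, header_converters: :symbol) do |row|
  oclc_id_set.add(row[:oclc])
  angies_csv << row.to_h
end
new_hathi_csv  = []
processed_keys = Set.new
CSV.foreach('./hathi_full_20200401.txt', headers: headers, col_sep: "\t", quote_char: "\0") do |row|
  next unless oclc_id_set.include?(row[:oclc_num])  # O(1) membership test
  next if processed_keys.include?(row[:oclc_num])   # O(1) membership test
  new_hathi_csv << row.to_h.except(*remove_keys)
  processed_keys.add(row[:oclc_num])
end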
I am trying to understand the proof of work algorithm. I computed a block header (which includes the nonce):
"02000000aaf8ab82362344f49083ee4edef795362cf135293564c4070000000000000000c009bb6222e9bc4cdb8f26b2e8a2f8d163509691a4038fa692abf9a474c9b21476800755c02e17181fe6c1c3"
I have to apply SHA256 to this twice. The correct answer is supposed to be:
"00000000000000001354e21fea9c1ec9ac337c8a6c0bda736ec1096663383429"
I tried pack, unpack, hex, etc., but I can't get this output. What is the correct Ruby code to convert the input to the output using SHA256?
header_hex = "02000000aaf8ab82362344f49083ee4edef795362cf135293564c4070000000000000000c009bb6222e9bc4cdb8f26b2e8a2f8d163509691a4038fa692abf9a474c9b21476800755c02e17181fe6c1c3"
# Decode header hex into binary string
header = [header_hex].pack("H*")
# Apply SHA256 twice
require "digest"
d1 = Digest::SHA256.digest(header)
d2 = Digest::SHA256.digest(d1)
# Convert to hex
result = d2.unpack("H*").join
# => "293438636609c16e73da0b6c8a7c33acc91e9cea1fe254130000000000000000"
Oops, for some reason the result comes out byte-reversed compared to the expected answer. Perhaps it is a byte-ordering issue? Let's try that again with the binary data reversed:
result = d2.reverse.unpack("H*").join
# => "00000000000000001354e21fea9c1ec9ac337c8a6c0bda736ec1096663383429"
Bingo!
Edit: Just to clarify, this was a trial-and-error solution. I don't have any special insight into the proof of work algorithm!
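For what it's worth, my understanding is that Bitcoin conventionally displays block hashes in reversed byte order, which is why the flip is needed. A compact helper along those lines (a sketch, not an authoritative reference) could be:
require 'digest'
# Double-SHA256 a hex-encoded block header and return the hash in the
# usual byte-reversed display order. Assumes header_hex is a valid hex string.
def block_hash_hex(header_hex)
  header = [header_hex].pack('H*')
  Digest::SHA256.digest(Digest::SHA256.digest(header)).reverse.unpack('H*').first
end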
I am trying to generate a file in ruby that has a specific size. The content doesn't matter.
Here is what I got so far (and it works!):
File.open("done/#{NAME}.txt", 'w') do |f|
contents = "x" * (1024*1024)
SIZE.to_i.times { f.write(contents) }
end
The problem is: once I zip or rar this file, the created archive is only a few KB. I guess that's because the highly repetitive data in the file compresses extremely well.
How do I create data that is more random as if it were just a normal file (for example a movie file)? To be specific: How to create a file with random data that keeps its size when archived?
You cannot guarantee an exact file size when compressing. However, as you suggest in the question, completely random data does not compress.
You can generate a random String using most random number generators. Even simple ones are capable of making hard-to-compress data, but you would have to write your own string-creation code. Luckily for you, Ruby comes with a built-in library that already has a convenient byte-generating method, and you can use it in a variation of your code:
require 'securerandom'
one_megabyte = 2 ** 20 # or 1024 * 1024, if you prefer
# Note use 'wb' mode to prevent problems with character encoding
File.open("done/#{NAME}.txt", 'wb') do |f|
SIZE.to_i.times { f.write( SecureRandom.random_bytes( one_megabyte ) ) }
end
This file is not going to compress much, if at all. Many compressors will detect that and just store the file as-is (making a .zip or .rar file slightly larger than the original).
For a given string size N and compression method c (e.g., from the rubyzip, libarchive or seven_zip_ruby gems), you want to find a string str such that:
str.size == c(str).size == N
I'm doubtful that you can be assured of finding such a string, but here's a way that should come close (a rough Ruby sketch of these steps follows below):
Step 0: Select a number m such that m > N.
Step 1: Generate a random string s with m characters.
Step 2: Compute str = c(s). If str.size <= N, increase m and repeat Step 1; else go to Step 3.
Step 3: Return str[0,N].
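A rough Ruby sketch of those steps, using Zlib from the standard library as the compressor c (my assumption; the gems mentioned above would work the same way):
require 'zlib'
require 'securerandom'
# Returns n bytes of already-compressed (and therefore hard-to-compress) data.
def incompressible_string(n)
  m = 2 * n                              # Step 0: pick m > N
  loop do
    s   = SecureRandom.random_bytes(m)   # Step 1: random string of m bytes
    str = Zlib::Deflate.deflate(s)       # Step 2: compress it
    return str[0, n] if str.size > n     # Step 3: take the first N bytes
    m *= 2                               # compressed output too short, so grow m and retry
  end
end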
As a personal challenge I'm trying to implement the SIMON block cipher in Ruby. I'm running into some issues finding the best way to work with the data. The full code related to this question is located at: https://github.com/Rami114/Personal/blob/master/Simon/Simon.rb
SIMON requires XOR, shift and circular-shift operations, the last of which is forcing me to work with Bignums so I can perform the left circular shift with arithmetic rather than a more complex/slower double loop over byte arrays.
Is there a better way to convert a string to a Bignum and back again?
String -> BigNum (where N is 64 and pt is a string of plaintext)
pt = pt.chars.each_slice(N/8).map {|x| x.join.unpack('b*')[0].to_i(2)}.to_a
So I break the string into individual characters, slice into N-sized arrays (the word size in SIMON) and unpack each set into a BigNum. That appears to work fine and I can convert it back.
Now my SIMON code is currently broken, but that's more the math I think/hope and not the code. The conversion back is (where ct is an array of bignums representing the ciphertext):
ct.map { |x| [x.to_s(2).rjust(128,'0')].pack('b*') }.join
I seem to have to right-justify/pad the string because Bignums have no fixed width, so there are no leading 0s. Unfortunately pack needs a defined width to produce sensible output.
Is this a valid method of conversion? Is there a better way? I'm not sure on either count and hoping someone here can help out.
Edit: For #torimus, the circular shift implementation I'm using (from the link above):
def self.lcs (bytes, block_size, shift)
((bytes << shift) | (bytes >> (block_size - shift))) & ((1<< block_size)-1)
end
If you would be equally happy with unpack('B*'), i.e. msb-first binary numbers (which you could well be if all your processing is circular), then you could also use .unpack('Q>') instead of .unpack('B*')[0].to_i(2) for generating pt:
pt = "qwertyuiopasdfghjklzxcvbnmQWERTYUIOPASDFGHJKLZXCVBNM1234567890!#"
# Your version (with 'B' == msb first) for comparison:
pt_nums = pt.chars.each_slice(N/8).map {|x| x.join.unpack('B*')[0].to_i(2)}.to_a
=> [8176115190769218921, 8030025283835160424, 7668342063789995618, 7957105551900562521,
6145530372635706438, 5136437062280042563, 6215616529169527604, 3834312847369707840]
# unpack to 64-bit unsigned integers directly
pt_nums = pt.unpack('Q>8')
=> [8176115190769218921, 8030025283835160424, 7668342063789995618, 7957105551900562521,
6145530372635706438, 5136437062280042563, 6215616529169527604, 3834312847369707840]
There are no native 128-bit pack/unpacks for the return direction, but you can split each value into two 64-bit halves with plain integer arithmetic:
split128 = 1 << 64
ct = pt_nums # Just to show round-trip (64-bit values here, hence the leading zero bytes below)
ct.map { |x| [ x / split128, x % split128 ].pack('Q>2') }.join
=> "\x00\x00\x00\x00\x00\x00\x00\x00qwertyui . . . " # truncated
This avoids a lot of the temporary stages in your code, but at the expense of using a different bit ordering - I don't know enough about SIMON to say whether this is adaptable to your needs.
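As a quick sanity check of the 128-bit split itself (a throwaway example with a made-up value, assuming the ciphertext values really are 128 bits wide):
split128 = 1 << 64
x = (0x0123456789abcdef << 64) | 0xfedcba9876543210   # sample 128-bit value
bytes = [x / split128, x % split128].pack('Q>2')       # 16-byte big-endian string
bytes.unpack('Q>2').inject { |hi, lo| (hi << 64) | lo } == x
# => true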
Upon creating an instance of a given ActiveRecord model object, I need to generate a shortish (6-8 characters) unique string to use as an identifier in URLs, in the style of Instagram's photo URLs (like http://instagram.com/p/P541i4ErdL/, which I just scrambled to be a 404) or Youtube's video URLs (like http://www.youtube.com/watch?v=oHg5SJYRHA0).
What's the best way to go about doing this? Is it easiest to just create a random string repeatedly until it's unique? Is there a way to hash/shuffle the integer id in such a way that users can't hack the URL by changing one character (like I did with the 404'd Instagram link above) and end up at a new record?
Here's a good, collision-free method already implemented in plpgsql.
First step: consider the pseudo_encrypt function from the PG wiki.
This function takes a 32-bit integer as argument and returns a 32-bit integer that looks random to the human eye but uniquely corresponds to its argument (so it's encryption, not hashing). Inside the function, you may replace the formula (((1366.0 * r1 + 150889) % 714025) / 714025.0) with another function known only to you that produces a result in the [0..1] range (just tweaking the constants will probably be good enough; see below for my attempt at doing just that). Refer to the Wikipedia article on the Feistel cipher for more theoretical background.
Second step: encode the output number in the alphabet of your choice. Here's a function that does it in base 62 with all alphanumeric characters.
CREATE OR REPLACE FUNCTION stringify_bigint(n bigint) RETURNS text
LANGUAGE plpgsql IMMUTABLE STRICT AS $$
DECLARE
alphabet text:='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
base int:=length(alphabet);
_n bigint:=abs(n);
output text:='';
BEGIN
LOOP
output := output || substr(alphabet, 1+(_n%base)::int, 1);
_n := _n / base;
EXIT WHEN _n=0;
END LOOP;
RETURN output;
END $$;
Now here's what we'd get for the first 10 URLs corresponding to a monotonic sequence:
select stringify_bigint(pseudo_encrypt(i)) from generate_series(1,10) as i;
stringify_bigint
------------------
tWJbwb
eDUHNb
0k3W4b
w9dtmc
wWoCi
2hVQz
PyOoR
cjzW8
bIGoqb
A5tDHb
The results look random and are guaranteed to be unique in the entire output space (2^32 or about 4 billion values if you use the entire input space with negative integers as well).
If 4 billion values is not wide enough, you may carefully combine two 32-bit results to get to 64 bits without losing uniqueness in the outputs. The tricky parts are dealing correctly with the sign bit and avoiding overflows.
About modifying the function to generate your own unique results: let's change the constant from 1366.0 to 1367.0 in the function body, and retry the test above. See how the results are completely different:
NprBxb
sY38Ob
urrF6b
OjKVnc
vdS7j
uEfEB
3zuaT
0fjsab
j7OYrb
PYiwJb
Update: for those who can compile a C extension, a good replacement for pseudo_encrypt() is range_encrypt_element() from the permuteseq extension, which has the following advantages:
works with any output space up to 64 bits, and it doesn't have to be a power of 2.
uses a secret 64-bit key for unguessable sequences.
is much faster, if that matters.
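Back on the Ruby/Rails side, one way to wire the two functions above into a model is to let the database compute the slug after insert. This is just a sketch, assuming both functions exist in your database and the model has a string slug column:
class Post < ActiveRecord::Base
  after_create :assign_slug
  private
  def assign_slug
    # Let PostgreSQL obfuscate the integer id and base-62 encode it.
    slug = self.class.connection.select_value(
      "SELECT stringify_bigint(pseudo_encrypt(#{id.to_i}))"
    )
    update_column(:slug, slug)
  end
end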
You could do something like this:
random_attribute.rb
module RandomAttribute
def generate_unique_random_base64(attribute, n)
until random_is_unique?(attribute)
self.send(:"#{attribute}=", random_base64(n))
end
end
def generate_unique_random_hex(attribute, n)
until random_is_unique?(attribute)
self.send(:"#{attribute}=", SecureRandom.hex(n/2))
end
end
private
def random_is_unique?(attribute)
val = self.send(:"#{attribute}")
val && !self.class.send(:"find_by_#{attribute}", val)
end
def random_base64(n)
val = base64_url
val += base64_url while val.length < n
val.slice(0..(n-1))
end
def base64_url
SecureRandom.base64(60).downcase.gsub(/\W/, '')
end
end
post.rb
class Post < ActiveRecord::Base
include RandomAttribute
before_validation :generate_key, on: :create
private
def generate_key
generate_unique_random_hex(:key, 32)
end
end
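Usage would then look roughly like this (a sketch; it assumes a key string column on posts and a hypothetical title attribute, and the value is 32 hex characters because generate_unique_random_hex(:key, 32) calls SecureRandom.hex(16)):
post = Post.create!(title: "Hello")
post.key  # => e.g. "9f86d081884c7d659a2feaa0c55ad015"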
You can hash the id:
Digest::MD5.hexdigest('1')[0..9]
=> "c4ca4238a0"
Digest::MD5.hexdigest('2')[0..9]
=> "c81e728d9d"
But somebody can still guess what you're doing and iterate through ids that way. It's probably better to hash on the content.
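For example, hashing a few of the record's attributes together with a random salt (the column names here are hypothetical):
require 'digest'
require 'securerandom'
# Derive a short token from the record's content plus a random salt,
# so it cannot be predicted from the id alone.
token = Digest::MD5.hexdigest("#{post.title}#{post.created_at}#{SecureRandom.hex(4)}")[0..9]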