I have a parser written in Perl which parses a file of fixed-length records. Part of each record consists of several strings (also fixed length) made up of digits only. Each character in these strings is encoded as a number, not as an ASCII character. I.e., if I have the string 12345, it's encoded as 01 02 03 04 05 (instead of 31 32 33 34 35).
I parse the record with unpack, and this particular part is unpacked as @array = unpack "C44", $s. Then I recover the needed string with a simple join, like $m = join("", @array).
I was wondering if that's an optimal way to decode. The files are quite big, millions of records, so naturally I looked at whether this can be optimized. The profiler shows that most of the time is spent parsing the records (i.e., reading, writing and other work is not the problem), and within the parsing most of the time is taken by these joins. I recall from other sources that join is quite an efficient operation. Any ideas whether the code can be sped up further, or is it already optimal? Perhaps the intermediate array could be avoided in some clever way, e.g., with a pack/unpack combination instead?
Edit: code example
The code I'm trying to optimise looks like this:
while (read(READ, $buf, $rec_l) == $rec_l) {
    my @s = unpack "A24 C44 H8", $buf;
    my $msisdn  = substr $s[0], 0, 11;
    my $address = join("", @s[4..14]);
    my $imsi    = join("", @s[25..39]);
    my $ts      = localtime(hex($s[45]));
}
Untested (I'll come back and edit when I'm less busy), but this should work if I've done all of the math correctly, and it should be faster:
my ($msisdn, $address, $imsi, $ts) =
    unpack "A11 x13 x3 a11 x10 a15 x5 N", $buf;
$address |= "0" x 11;
$imsi    |= "0" x 15;
$ts = localtime($ts);
As always in Perl, faster is less readable :-)
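The two |= lines above rely on Perl's bitwise string OR: each raw byte 0x00-0x09, OR-ed against an ASCII "0" (0x30), becomes the corresponding ASCII digit, so no intermediate array or join is needed. A rough Python illustration of the same byte-level trick (not the original Perl, just for clarity):

raw = bytes([1, 2, 3, 4, 5])                      # the encoded field for "12345"
decoded = bytes(b | 0x30 for b in raw).decode()   # OR each byte with 0x30 -> "12345"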
join("", unpack("C44", $s))
I don't believe this change would speed up your code. Everything depends on how often you call join while reading one whole file. If you're working in chunks, try increasing their size. If you're doing some operation on the array between the unpack and the join, try folding it into a map operation. If you post your source code it will be easier to identify the bottleneck.
I'm a pack/unpack noob, but how about skipping the join by altering your sample code like so:
my $m = unpack "H*", $s ;
quick test:
#!/usr/bin/perl
use strict ;
use Test::More tests => 1 ;
is( unpack("H*", "\x12\x34\x56"),"123456");
I have a structure called s in Matlab. It is a structure with two fields, a and b, and its size is 1 x 1,620,000.
It is a very large structure (it probably takes up half of the RAM of my machine).
I am looking for an efficient way to concatenate each of the fields a and b into two separate arrays that I can then export to CSV. I built the code below to do so, but even after 12 hours of running it has not even reached a quarter of the loop. Is there a more efficient way of doing this?
a = [];
b = [];
total_n = size(s,2);
count = 1;
while size(s,2) > 0
    if size(s(1).a,1)
        a = [a; s(1).a];
    end
    if size(s(1).b,1)
        b = [b; s(1).b];
    end
    s(1) = []; % to save memory
    if mod(count,1000) == 0
        fprintf('Done %2f \n', [count/total_n])
    end
    count = count + 1;
end
s(1) = []; %to save memory
Ah, but what a huge misunderstanding that comment is.
If size(s) is 1 x 1,620,000, that single line forces the loop to do (under the hood, where you don't see it):
snew = zeros(1, size(s,2)-1);  % now you use double the memory
snew = s(2:end);               % now you force an unnecessary copy
So not only does that line make your code require double the memory, but in each iteration you also make an unnecessary copy of a large array.
Just replace your while with a normal for loop, for ii = 1:size(s,2), and index s with ii!
Hopefully you can now see why the following is an equally big mistake (not only that, but any modern MATLAB version will warn you about this in the editor):
a = [];
a = [a; s(1).a];
Here, in each iteration you force MATLAB to create a new a that is one element bigger than before and to copy the contents of the old a into it.
Instead, preallocate a.
As you don't know in advance what you are going to put there, I suggest using a cell array, since each s(ii).a has a different length.
After the loop you can then remove all the empty (isempty) cells if you want.
Managed to do it efficiently:
s = struct2cell(s);
s = squeeze(s);     % s is now a 2 x N cell array: row 1 holds the a fields, row 2 the b fields
a = s(1,:);
a = a';
a = vertcat(a{:});
b = s(2,:);
b = b';
b = vertcat(b{:});
I am building a file-based index for the sorted haveibeenpwned passwords text file and it got me wondering what's the fastest way to do this?
I figured a good way to build a quickly grep-able index would be to split the sorted file into 256 files named after the first two hex digits (i.e. FF.txt, FE.txt, etc.). I found ripgrep rg to be about 5 times faster than grep on my computer. So I tried something like this:
for i in {255..0}
do
    start=$(date +%s)
    hex="$(printf '%02x' $i | tr '[:lower:]' '[:upper:]')"
    rg "^$hex" pwned-passwords-ntlm-ordered-by-hash-v4.txt > ntlm/$hex-ntlm.txt
    echo 0x$hex completed in $(($(date +%s) - $start)) seconds
done
This is the fastest solution I could come up with. ripgrep creates each file in about 25 seconds, so I'm looking at roughly 100 minutes to create the index. When I split the job in half and run the two halves in parallel, each pair of files takes 80 seconds, so it seems best to just let ripgrep work its magic and run in series.
Obviously, I won't be indexing this list too often, but it's just fun to think about. Any thoughts on a faster way (aside from using a database) to index this file?
ripgrep, like any other tool that's able to work with unsorted input files at all, is the wrong tool for this job. When you're trying to grep sorted inputs, you want something that can bisect your input file to find a position in logarithmic time. For big enough inputs, even a slow O(log n) implementation will be faster than a highly optimized O(n) one.
pts-line-bisect is one such tool, though of course you're also welcome to write your own. You'll need to write it in a language with full access to the seek() syscall, which is not exposed in bash.
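For illustration, here is a minimal Python sketch of that bisection idea (this is not pts-line-bisect itself; the file name and prefix in the usage comment are just taken from the question):

def grep_sorted(path, prefix):
    """Yield the lines starting with `prefix` from a file whose lines are sorted."""
    prefix = prefix.encode()
    with open(path, 'rb') as f:
        f.seek(0, 2)                  # learn the file size
        lo, hi = 0, f.tell()
        while lo < hi:                # find the smallest offset whose next full line is >= prefix
            mid = (lo + hi) // 2
            f.seek(mid)
            if mid:
                f.readline()          # skip the partial line we landed in
            line = f.readline()
            if line and line < prefix:
                lo = mid + 1
            else:
                hi = mid
        f.seek(lo)                    # re-read from the offset we found
        if lo:
            f.readline()
        line = f.readline()
        while line.startswith(prefix):
            yield line.decode()
            line = f.readline()

# e.g.: for line in grep_sorted('pwned-passwords-ntlm-ordered-by-hash-v4.txt', 'FF'): print(line, end='')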
You are reading through the file 256 times, doing a full scan of the file each time. Consider an approach that reads the file once, writing each line to an open file descriptor; Python would be an easy choice of implementation (if that's your thing). You could optimize by keeping each output file open until you hit a new hex code at the beginning of a line (a rough Python sketch of that idea follows the bash version below). If you want to be even more clever, there is no need to go through the sorted file line by line: based on Charles Duffy's hint, you could create a heuristic that samples the file (using seek()) to find the byte offset of the next hex value, and then write that whole block of bytes to the new file. However, since this is tagged as 'bash', let's keep the solution set in that domain:
while IFS= read -r line
do
    hex=${line:0:2}
    echo "$line" >> "ntlm/$hex-ntlm.txt"
done < pwned-passwords-ntlm-ordered-by-hash-v4.txt
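If you do want to reach for Python for the single-pass split described above, a rough sketch might look like this (the file and directory names are taken from the question; it assumes the input is sorted, so each two-digit prefix forms one contiguous block):

out, current = None, None
with open('pwned-passwords-ntlm-ordered-by-hash-v4.txt') as src:
    for line in src:
        prefix = line[:2]
        if prefix != current:            # new prefix: close the old file, open a new one
            if out:
                out.close()
            out = open('ntlm/%s-ntlm.txt' % prefix, 'w')
            current = prefix
        out.write(line)
if out:
    out.close()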
I wrote a Python 3 script that does fast binary-search lookups in the hash file without having to create an index. It doesn't directly address your question (indexing), but it probably solves the underlying problem that you wanted to solve with an index: quickly looking up individual hashes. This script checks hundreds of passwords in seconds.
import argparse
import hashlib
parser = argparse.ArgumentParser(description='Searches passwords in https://haveibeenpwned.com/Passwords database.')
parser.add_argument('passwords', metavar='TEST', type=str, help='text file with passwords to test, one per line, utf-8')
parser.add_argument('database', metavar='DATABASE', type=str, help='the downloaded text file with sha-1:count')
args = parser.parse_args()
def search(f: object, pattern: str) -> str:
    def search(left: int, right: int) -> str:
        if left >= right:
            return None
        middle = (left + right) // 2
        if middle == 0:
            f.seek(0, 0)
            test = f.readline()
        else:
            f.seek(middle - 1, 0)
            _ = f.readline()    # discard the (possibly partial) line we landed in
            test = f.readline()
        if test.upper().startswith(pattern):
            return test
        elif left == middle:
            return None
        elif pattern < test:
            return search(left, middle)
        else:
            return search(middle, right)
    f.seek(0, 2)                # seek to the end of the file to get its size
    return search(0, f.tell())

fsource = open(args.passwords)
fdatabase = open(args.database)
source_lines = fsource.readlines()
for l in source_lines:
    line = l.strip()
    hash_object = hashlib.sha1(line.encode("utf-8"))
    pattern = hash_object.hexdigest().upper()
    print("%s:%s" % (line, str(search(fdatabase, pattern)).strip()))
fsource.close()
fdatabase.close()
The SciTE editor comes with an embedded Lua scripting engine that has access to the editor's text buffer. This makes it possible to extend SciTE's functionality with tools programmed in Lua and started from the Tools menu. One such tool, available from here:
http://lua-users.org/wiki/SciteSortSelection
is a tool for sorting the selected lines in alphabetical order.
What was/is annoying for me is that it doesn't sort lines containing numbers in their numerical order, but like this:
1
111
2
222
3
333
where I would rather expect:
1
2
3
111
222
333
Google and Co. are not of much help here, as to my knowledge no solution to this problem is available online yet, and it is not that easy to find and fully understand the Lua documentation for table.sort(). So the question for a knowledgeable Lua programmer is: what is the best way to patch the existing Lua script code so that numbers (and also lines of text with leading spaces) are sorted as expected, and so that the Lua code runs fast enough that even sorting huge files (50 MByte and more) won't take much time?
Your expectation is wrong. You said the algorithm is supposed to sort the texts alphabetically and that is exactly what it does.
For Lua, "11" is smaller than "2".
I think you would agree that "aa" should come before "b", which is pretty much the same thing.
If you want to change how texts are sorted you have to provide your own function.
The Lua reference manual says:
table.sort (list [, comp])
Sorts list elements in a given order, in-place, from list[1] to
list[#list]. If comp is given, then it must be a function that
receives two list elements and returns true when the first element
must come before the second in the final order (so that, after the
sort, i < j implies not comp(list[j],list[i])). If comp is not given,
then the standard Lua operator < is used instead.
Note that the comp function must define a strict partial order over
the elements in the list; that is, it must be asymmetric and
transitive. Otherwise, no valid sort may be possible.
The sort algorithm is not stable: elements considered equal by the
given order may have their relative positions changed by the sort.
So you are free to implement your own comp function to change the sorting.
By default, table.sort(list) sorts the list in ascending order.
To make it sort in descending order you call:
table.sort(list, function(a,b) return a > b end)
If you want to treat numbers differently you can do something like this:
t = {"111", "11", "3", "2", "a", "b"}

local function myCompare(a, b)
    local a_number = tonumber(a)
    local b_number = tonumber(b)
    if a_number and b_number then
        return a_number < b_number
    end
end

table.sort(t, myCompare)

for i, v in ipairs(t) do
    print(v)
end
which would give you the output
2
3
11
111
a
b
Of course this is just a quick and simple example. A nicer implementation is up to you.
Below is what I finally came up with myself. It is surely a quick and dirty solution which further slows the already slow process of sorting huge buffers of text lines (compared to jEdit [Plugins]->[Text Tools]->[Sort Lines] or to the bash command line 'sort -g'), but at least it is there for use and works as expected. For the sake of completeness, here is the entire section of code currently present in my Lua startup script for SciTE:
-- =============================================================================
-- Sort Selected Lines (available in MENU -> Tools):
-- -----------------------------------------------------------
-- Specify in .SciTEUser.properties:
-- command.name.2.*=# Sort Selected Lines '
-- command.subsystem.2.*=3
-- command.mode.2.*=savebefore:no
-- command.2.*=SortSelectedLines
-- # command.shortcut.2.*=Ctrl+2 # Ctrl+2 is DEFAULT for command.2.*

-- Split a string into a table of lines (handles both \n and \r\n endings).
function lines(str)
    local t = {}
    local i, lstr = 1, #str
    while i <= lstr do
        local x, y = string.find(str, "\r?\n", i)
        if x then t[#t + 1] = string.sub(str, i, x - 1)
        else break
        end
        i = y + 1
    end
    if i <= lstr then t[#t + 1] = string.sub(str, i) end
    return t
end

-- It was annoying for me that using table.sort(buffer) in Lua
-- didn't sort numbers with leading spaces in their numerical order.
-- Using the following comparison function avoids that problem:
function compare(a, b)
    return a:gsub(" ", "0") < b:gsub(" ", "0")
    -- If 'compare' is not used ( table.sort(buf) ),
    -- Lua implicitly sorts with the < operator (see the Lua manual):
    --   return a < b
    -- so changing the return statement above to this one
    -- would be enough to restore sorting to how it was before.
end

function SortSelectedLines()
    local sel = editor:GetSelText()
    if #sel == 0 then return end
    local eol = string.match(sel, "\n$")
    local buf = lines(sel)
    table.sort(buf, compare)
    --table.foreach(buf, print) -- used for debugging
    local out = table.concat(buf, "\n")
    if eol then out = out.."\n" end
    editor:ReplaceSel(out)
end
-- ---------
-- :Sort Selected Lines
-- -----------------------------------------------------------------------------
I have files that can be 19 GB or greater; they will be huge but sorted. Can I use the fact that they are sorted to my advantage when searching to see whether a certain string exists?
I looked at something called sgrep, but I'm not sure if it's what I'm looking for. As an example, I will have a 19 GB text file with millions of rows of
ABCDEFG,1234,Jan 21,stackoverflow
and I want to search just the first column of these millions of rows to see if ABCDEFG exists in this huge text file.
Is there a more efficient way than just grepping the file for the string and seeing whether a result comes up? I don't even need the line; I just need, more or less, a boolean: true/false, is it inside this file.
Actually sgrep is what I was looking for. The reason I got confused is that structured grep has the same name as sorted grep and I was installing the wrong package. sgrep is amazing.
I don't know if there are any utilities that would help you out of the box, but it would be pretty straightforward to write an application specific to your problem. A binary search would work well, and should yield your result within 20-30 queries against the file.
Let's say your lines are never more than 100 characters, and the file is B bytes long.
Do something like this in your favorite language:
sub file_has_line(file, target) {
    a = 0
    z = file.length
    while (a < z) {
        m = (a+z)/2
        chunk = file.read(m, 200)
        // That is, read 200 bytes, starting at offset m.
        line = chunk.split(/\n/)[1]
        // Split the chunk on newlines and keep only the second piece;
        // the first piece is usually the tail of a partial line.
        if line == target
            return true
        elsif line < target
            a = m + 1
        else
            z = m - 1
    }
    return false
}
If you're only doing a single lookup, this will dramatically speed up your program. Instead of reading ~20 GB, you'll be reading ~20 KB of data.
You could try to optimize this a bit by extrapolating that "Xerox" is going to be at 98% of the file and starting the midpoint there...but unless your need for optimization is quite extreme, you really won't see much difference. The binary search will get you that close within 4 or 5 passes, anyway.
If you're doing lots of lookups (I just saw your comment that you will be), I would look to pump all that data into a database where you can query at will.
So if you're doing 100,000 lookups, but this is a one-and-done process where having it in a database has no ongoing value, you could take another approach...
Sort your list of targets, to match the sort order of the log file. Then walk through each in parallel. You'll still end up reading the entire 20 GB file, but you'll only have to do it once and then you'll have all your answers. Something like this:
sub file_has_lines(file, target_array) {
    target_array = target_array.sort
    target = target_array.shift()   // take the smallest target first
    line = file.readln()
    hits = []
    do {
        if line < target
            line = file.readln()            // the file is behind: advance the file
        elsif line > target
            if target_array.empty() break   // nothing left to look for
            target = target_array.shift()   // the target is behind: advance the targets
        else                                // line == target
            hits.push(line)
            line = file.readln()
    } while not file.eof()
    return hits
}
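For what it's worth, here is a rough, runnable Python version of that merge-style walk (an assumption on my part: comma-separated lines whose first field is the sorted key, as in the example above):

def file_has_lines(path, targets):
    """Return the matching lines for every target key, reading the file only once."""
    targets = sorted(targets)
    hits, i = [], 0
    with open(path) as f:
        line = f.readline()
        while line and i < len(targets):
            key = line.split(',', 1)[0]
            if key < targets[i]:
                line = f.readline()        # the file is behind: advance the file
            elif key > targets[i]:
                i += 1                     # the target is behind: advance the targets
            else:
                hits.append(line.rstrip('\n'))
                line = f.readline()
    return hits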
As a personal challenge I'm trying to implement the SIMON block cipher in Ruby. I'm running into some issues finding the best way to work with the data. The full code related to this question is located at: https://github.com/Rami114/Personal/blob/master/Simon/Simon.rb
SIMON requires XOR, shift and circular-shift operations; the last of these forces me to work with Bignums so I can perform the left circular shift with arithmetic rather than a more complex/slower double loop over byte arrays.
Is there a better way to convert a string to a Bignum and back again?
String -> BigNum (where N is 64 and pt is a string of plaintext)
pt = pt.chars.each_slice(N/8).map {|x| x.join.unpack('b*')[0].to_i(2)}.to_a
So I break the string into individual characters, slice them into groups of N/8 characters (N being the word size in SIMON) and unpack each group into a Bignum. That appears to work fine, and I can convert it back.
Now my SIMON code is currently broken, but that's more the math I think/hope and not the code. The conversion back is (where ct is an array of bignums representing the ciphertext):
ct.map { |x| [x.to_s(2).rjust(128,'0')].pack('b*') }.join
I seem to have to right-justify-pad the string, as Bignums are of undefined width, so I have no leading 0s. Unfortunately pack requires a defined width to produce sensible output.
Is this a valid method of conversion? Is there a better way? I'm not sure on either count and hoping someone here can help out.
Edit: for @torimus, the circular shift implementation I'm using (from the link above):
def self.lcs(bytes, block_size, shift)
  ((bytes << shift) | (bytes >> (block_size - shift))) & ((1 << block_size) - 1)
end
If you would be equally happy with unpack('B*') with msb first binary numbers (which you could well be if all your processing is circular), then you could also use .unpack('Q>') instead of .unpack('B*')[0].to_i(2) for generating pt:
pt = "qwertyuiopasdfghjklzxcvbnmQWERTYUIOPASDFGHJKLZXCVBNM1234567890!#"
# Your version (with 'B' == msb first) for comparison:
pt_nums = pt.chars.each_slice(N/8).map {|x| x.join.unpack('B*')[0].to_i(2)}.to_a
=> [8176115190769218921, 8030025283835160424, 7668342063789995618, 7957105551900562521,
6145530372635706438, 5136437062280042563, 6215616529169527604, 3834312847369707840]
# unpack to 64-bit unsigned integers directly
pt_nums = pt.unpack('Q>8')
=> [8176115190769218921, 8030025283835160424, 7668342063789995618, 7957105551900562521,
6145530372635706438, 5136437062280042563, 6215616529169527604, 3834312847369707840]
There are no native 128-bit pack/unpacks to return in the other direction, but you can use Fixnum to solve this too:
split128 = 1 << 64
ct = pt_nums # Just to show the round trip
ct.map { |x| [ x / split128, x % split128 ].pack('Q>2') }.join
=> "\x00\x00\x00\x00\x00\x00\x00\x00qwertyui . . . " # truncated
This avoids a lot of the temporary stages in your code, but at the expense of using a different byte coding - I don't know enough about SIMON to say whether this is adaptable to your needs.
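For comparison only (not part of the original Ruby answer, and assuming the same MSB-first byte ordering is acceptable), the equivalent round trip in Python with int.from_bytes/to_bytes:

pt = b"qwertyuiopasdfghjklzxcvbnmQWERTYUIOPASDFGHJKLZXCVBNM1234567890!#"
nums = [int.from_bytes(pt[i:i+8], 'big') for i in range(0, len(pt), 8)]   # like unpack('Q>8')
back = b''.join(n.to_bytes(16, 'big') for n in nums)                      # 128-bit blocks, zero-padded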