Extract fields out of a Ruby hash with special chars - ruby

I'm working on making a PDF signature visible using origami-pdf, and in the process I noticed that my signature is a hash, so I'm trying to capture fields like "Location", "Reason", "Date", and "ContactInfo".
{/Type=>/Sig, /Contents=>"0\x82\a\xAE\x06\t*\x86H\x86\xF7\r\x01\a\x02\xA0\x82\a\
x9F0\x82\a\x9B\x02\x01\x011\v0\t\x06\x05+\x0E\x03\x02\x1A\x05\x000#\x06\t*\x86H\
x86\xF7\r\x01\a\x01\xA0\x16\x04\x14\xEF8uEn1#\x11M\x95\xE4\xD7\x9C\xFE(\xCF\xB7\
x92\x01\xC2\xA0\x82\x05\x970\x82\x05\x930\x82\x04{\xA0\x03\x02\x01\x02\x02\x04Bo
\x93\x8C0\r\x06\t*\x86H\x86\xF7\r\x01\x01\x05\x05\x000>1\v0\t\x06\x03U\x04\x06\x
13\x02pt1\x150\x13\x06\x03U\x04\n\x13\fMULTICERT-CA1\x180\x16\x06\x03U\x04\x03\x
13\x0FMULTICERT-CA 020\x1E\x17\r130320170147Z\x17\r140320164736Z0\x81\xA51\v0\t\
x06\x03U\x04\x06\x13\x02PT1\x150\x13\x06\x03U\x04\n\x13\fMULTICERT-CA1\x160\x14\
x06\x03U\x04\v\x13\rCERTIPOR - RA1\x120\x10\x06\x03U\x04\v\x13\tCorporate1 0\x1E
\x06\x03U\x04\v\x13\x17ESCRITA INTELIGENTE LDA1\x180\x16\x06\x03U\x04\v\x13\x0FW
eb Application1\x170\x15\x06\x03U\x04\x03\x13\x0ERECIBOS ONLINE0\x81\x9F0\r\x06\
t*\x86H\x86\xF7\r\x01\x01\x01\x05\x00\x03\x81\x8D\x000\x81\x89\x02\x81\x81\x00\x
AC\xCE\xA4\x06\x901\xB5x\x89lE\rw\xC8<\x13\xDDu\xC6h\xBF'b6\x8D\xB0\xA0\xB1Y\e\x
18\x00\xE5\x8C\x1A\xCD\xBB%\xDA\x15P\x1A\xF91\xF9\xF6\xBA\xE0\xF8\xF6LH\x16\x86\
xE9Y\xDE\x00Z\xEC\x82\xB3=\r2fP7\xD1\x8B\xF3k\xF7|MVb\fB\xFB\xBA\x92\xD3\xFF9\x7
F\x9D\x83w\xFE\xAB\xBA\x93G\x8F\xCE\xF0\t!d\x83\xD3F\xAC\xCCv\xCA\x10\xC9\xB8e;\
x80\xB8\xF6\xEBI\xBD\x93\x89zC\xDF\x06-\r\x9E\xD3\x02\x03\x01\x00\x01\xA3\x82\x0
2\xB30\x82\x02\xAF0\v\x06\x03U\x1D\x0F\x04\x04\x03\x02\x03\xF808\x06\b+\x06\x01\
x05\x05\a\x01\x01\x04,0*0(\x06\b+\x06\x01\x05\x05\a0\x01\x86\x1Chttp://ocsp.mult
icert.com/ca0\x81\xE0\x06\x03U\x1D \x04\x81\xD80\x81\xD50M\x06\t+\x06\x01\x04\x0
1\xB0<\n\x020#0>\x06\b+\x06\x01\x05\x05\a\x02\x01\x162http://www.multicert.com/c
ps/multicert-ca-cps.html0\x81\x83\x06\v+\x06\x01\x04\x01\xB0<\n\x02\x88\x060t0r\
x06\b+\x06\x01\x05\x05\a\x02\x020f\x1Ed\x00h\x00t\x00t\x00p\x00:\x00/\x00/\x00w\
x00w\x00w\x00.\x00m\x00u\x00l\x00t\x00i\x00c\x00e\x00r\x00t\x00.\x00c\x00o\x00m\
x00/\x00c\x00p\x00/\x00m\x00u\x00l\x00t\x00i\x00c\x00e\x00r\x00t\x00-\x00c\x00a\
x00-\x001\x000\x003\x000\x00.\x00h\x00t\x00m\x00l0\x11\x06\t`\x86H\x01\x86\xF8B\
x01\x01\x04\x04\x03\x02\x04\xB00 \x06\x03U\x1D\x11\x04\x190\x17\x81\x15info#reci
bosonline.pt0\x82\x01\x01\x06\x03U\x1D\x1F\x04\x81\xF90\x81\xF60\x81\x9A\xA0\x81
\x97\xA0\x81\x94\x86/http://www.multicert.com/ca/multicert-ca-02.crl\x86aldap://
ldap.multicert.com/cn=MULTICERT-CA%2002,o=MULTICERT-CA,c=PT?certificateRevocatio
nList?base0W\xA0U\xA0S\xA4Q0O1\v0\t\x06\x03U\x04\x06\x13\x02pt1\x150\x13\x06\x03
U\x04\n\x13\fMULTICERT-CA1\x180\x16\x06\x03U\x04\x03\x13\x0FMULTICERT-CA 021\x0F
0\r\x06\x03U\x04\x03\x13\x06CRL2950\x1F\x06\x03U\x1D#\x04\x180\x16\x80\x14\x1D\x
C3\xB9\x88\xA5\x18\xBE`\xA7,\xA6c\xCAf*\xFC\f'\xC1\xBD0\x1D\x06\x03U\x1D\x0E\x04
\x16\x04\x14\x06\xD8\x1Fr6a\x9E\xEB\x176\x9C)\x9E-t\xFF\xD080\x190\t\x06\x03U\x1
D\x13\x04\x020\x000\r\x06\t*\x86H\x86\xF7\r\x01\x01\x05\x05\x00\x03\x82\x01\x01\
x00AQ\x1F\xCD\\ua\x98\e\rT2kW\xF7\xB8|CZ\xAC\xB7\xA2\x96(\bv\x83\x13\x89*\xB1#r7
\xE9WW{\x87T\x14\xDE\x81\nA2?\x9E\nv\x8E\x9A\xC4\\\x0Ff\xAE\t<2\xC1\x14S\xC6F?\x
85o\xEFb\xE2x!\x13M\xD0\x9Fu \x80\x00\x04\x0E\x89\xA8\x14\xE60\x96#\xC5\xD0Ac\xC
0<\xFD\xE31S\x90\x8A\xC3\xDF\xCA[\x1Cf\xC3\xDC\xB8\x96D\xA3\x03\x0F\xE7\x94\xD5\
v\xD2U\xD3\x96SZz\xF2g\xC3\xA58\x14{\x93q\xD0_#\xD8\xCAH\x1A\xEB\xC7\xD7\xA7\xD9
|.\x7F\xB5\xABI\xC4\xE4UNH\x00d\x8B\xC7k\x1A\xF5a*\x1D\x93a\xD1r\bNpi\t(\xA9\x11
\xFC \x983\xC5\x06!\x9C\xF1\x86\xB6P{Y\x9EL\x0FB\xF3\xBF#\xC2\xB8\xF0\xA0x\xD0\x
1D\x9B\xF5\xFDGF\xD9rS\xEEO\xE8\xF4rH\x9B=\xC2opr\xC6Xr\x18\x82[\xB3\x06\x10t\xB
9\xC2#\xF8\x92\x8D6\xFE\xFC\x0Fp\x88\x97u,\xD9F1\x82\x01\xC70\x82\x01\xC3\x02\x0
1\x010F0>1\v0\t\x06\x03U\x04\x06\x13\x02pt1\x150\x13\x06\x03U\x04\n\x13\fMULTICE
RT-CA1\x180\x16\x06\x03U\x04\x03\x13\x0FMULTICERT-CA 02\x02\x04Bo\x93\x8C0\t\x06
\x05+\x0E\x03\x02\x1A\x05\x00\xA0\x81\xD80\x18\x06\t*\x86H\x86\xF7\r\x01\t\x031\
v\x06\t*\x86H\x86\xF7\r\x01\a\x010\x1C\x06\t*\x86H\x86\xF7\r\x01\t\x051\x0F\x17\
r130329223127Z0#\x06\t*\x86H\x86\xF7\r\x01\t\x041\x16\x04\x14\x93\xD9l\xBD68\xDB
*M\xADY\xF8\x8F<\x8E\x94m\xACS\xAE0y\x06\t*\x86H\x86\xF7\r\x01\t\x0F1l0j0\v\x06\
t`\x86H\x01e\x03\x04\x01*0\v\x06\t`\x86H\x01e\x03\x04\x01\x160\v\x06\t`\x86H\x01
e\x03\x04\x01\x020\n\x06\b*\x86H\x86\xF7\r\x03\a0\x0E\x06\b*\x86H\x86\xF7\r\x03\
x02\x02\x02\x00\x800\r\x06\b*\x86H\x86\xF7\r\x03\x02\x02\x01#0\a\x06\x05+\x0E\x0
3\x02\a0\r\x06\b*\x86H\x86\xF7\r\x03\x02\x02\x01(0\r\x06\t*\x86H\x86\xF7\r\x01\x
01\x01\x05\x00\x04\x81\x803]\xBC\xA2\xC5\x0F&\r\x94\x96\xD5\xBD\xF2\x96\xB3\x86\
x9D\x01\xA3{5\xEC\xA5\xEC\x8B=\r\xD7%w0o\x9C\x7F\v\x17YX\x80\xAF\x1A\x8F\x1E\xBB
e\xBCp4\xF7\x80\x89b&?\xCE<\xCC\x8D\xFE\xEFK\x86\x0F\xD8Q\xFFU\x04\x11E\t\xED\xC
9=WF\x93\x10w\xC6g\xD4\e`\xE5\xB5{Ax~%\xE9\x92\xF5\x01\x19\xCDS\xE1|%\"\xB2\xC6\
x107\xE9\xF7M\xD7\xA3\x11MJ\xAF\x03\x0F\xFF\x8D:s\x84g\xB6\xD5o\xAF\xB0\x00\x00\
x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
x00\x00\x00\x00\x00\x00", /Filter=>/Adobe.PPKMS, /SubFilter=>/adbe.pkcs7.sha1, /
ByteRange=>[0, 12883, 17081, 1098], /Location=>"Portugal", /ContactInfo=>"email#email.com", /Reason=>"Proof of Concept", /M=>"D:20130329223127Z00'00", /P
rop_Build=>{/Filter=>{/Name=>/Adobe.PPKMS, /R=>131101, /Date=>"2013-03-29 22:31:
27 +0000"}, /SigQ=>{/Preview=>false, /R=>131101}, /PubSec=>{/NonEFontNoWarn=>fal
se, /Date=>"2013-03-29 22:31:27 +0000", /R=>131101}, /App=>{/TrustedMode=>false,
/OS=>[/Win], /R=>458752, /Name=>/Exchange-Pro}}}
If I extract the keys (pdf.signature.keys) I get:
/Type
/Contents
/Filter
/SubFilter
/ByteRange
/Location
/ContactInfo
/Reason
/M
/Prop_Build
Now, how do I reach the contents of these keys?
I cannot simply do pdf.signature[/Location], because Ruby treats the / as the start of a regex literal and reports a syntax error...
Any ideas?

I took a look at the source for origami-pdf, and it seems that the / prepended to every key in that output is generated on the fly by Origami::Name.to_s. Also, looking at its eql? definition, it seems to just compare against the underlying string value. So this should work; have you tried it?
signature[Origami::Name.new('Location')]
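For instance (a minimal, untested sketch, assuming the signature keys compare equal to Origami::Name objects built from the bare field names):

sig = pdf.signature
%w[Location Reason ContactInfo M].each do |field|
  puts "#{field}: #{sig[Origami::Name.new(field)]}"  # e.g. Location: Portugal
end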

As generating an Origami::Name object with /Location seems to be so hard, I solved this with a different approach:
location = pdf.signature[pdf.signature.keys[5]]
The output is "Portugal", and this is the approach I will take, since the keys' positions in the array are supposed to be static.
I would appreciate a more elegant solution though
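One possibility (a hedged sketch, not verified against origami-pdf; it assumes each key's to_s renders as "/Location" etc., exactly as in the dump above) is to build a plain-String-keyed copy of the signature dictionary once and index into that:

sig_fields = pdf.signature.each_with_object({}) do |(key, value), acc|
  acc[key.to_s.sub(%r{\A/}, '')] = value  # "/Location" -> "Location"
end
sig_fields['Location']  # => "Portugal"
sig_fields['Reason']    # => "Proof of Concept"

This keeps the lookup readable without depending on key order.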

Related

With ruamel.yaml how can I conditionally convert flow maps to block maps based on line length?

I'm working on a ruamel.yaml (v0.17.4) based YAML reformatter (using the RoundTrip variant to preserve comments).
I want to allow a mix of block- and flow-style maps, but in some cases, I want to convert a flow-style map to use block-style.
In particular, if the flow-style map would be longer than the max line length^, I want to convert that to a block-style map instead of wrapping the line somewhere in the middle of the flow-style map.
^ By "max line length" I mean the best_width that I configure by setting something like yaml.width = 120 where yaml is a ruamel.yaml.YAML instance.
What should I extend to achieve this? The emitter is where the line length gets calculated so wrapping can occur, but I suspect that is too late to convert between block and flow style. I'm also concerned about losing comments when I switch the styles. Here are some possible extension points; can you give me a pointer on where I'm most likely to have success with this?
Emitter.expect_flow_mapping() probably too late for converting flow->block
Serializer.serialize_node() probably too late as it consults node.flow_style
RoundTripRepresenter.represent_mapping() maybe? but this has no idea about line length
I could also walk the data before calling yaml.dump(), but this has no idea about line length.
So, where should I (and where can I) adjust the flow_style, depending on whether a flow-style map would trigger line wrapping?
I think the most accurate approach, when you encounter a flow-style mapping during the dumping process, is to first try to emit it to a buffer, get the length of that buffer, and, if that length combined with the column you are in exceeds the width, actually emit block style.
Any attempt to guesstimate the length of the output without actually trying to write that part of the tree is going to be hard, if not impossible, without doing the actual emit. Among other things, the dumping process actually dumps scalars and reads them back to make sure no quoting needs to be forced (e.g. when you dump a string that reads back like a date). It also handles single key-value pairs in a list in a special way ([1, a: 42, 3] instead of the more verbose [1, {a: 42}, 3]). So a simple calculation of the length of the scalars that are the keys and values plus the separating commas, colons, and spaces is not going to be precise.
A different approach is to dump your data with a large line width, parse the output, and build a set of line numbers for which the line is too long according to the width you actually want to use. After loading that output back, you can walk over the data structure recursively, inspect the .lc attribute to determine the line number on which a flow-style mapping (or sequence) started, and, if that line number is in the set you built beforehand, change the mapping to block style. If you have nested flow-style collections, you might have to repeat this process.
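To make that concrete, here is a minimal sketch of the two attributes this approach relies on: round-tripped collections carry .fa (format: flow vs block) and .lc (the position where they started):

import ruamel.yaml

yaml = ruamel.yaml.YAML()  # round-trip variant by default
data = yaml.load("a: {b: 1, c: 2}\n")
print(data['a'].fa.flow_style())  # True -> this mapping is currently flow style
print(data['a'].lc.line)          # 0    -> it starts on line 0 of the input
data['a'].fa.set_block_style()    # it would be emitted block style on the next dump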
If you run the full program below, the initial dumped value for quote will be on one line. The change_to_block method as presented changes all mappings/sequences that are too long and that are on one line.
import sys
import ruamel.yaml

yaml_str = """\
movie: bladerunner
quote: {[Batty, Roy]: [
    I have seen things you people wouldn't believe.,
    Attack ships on fire off the shoulder of Orion.,
    I watched C-beams glitter in the dark near the Tannhäuser Gate.,
  ]}
"""

class Blockify:
    def __init__(self, width, only_first=False, verbose=0):
        self._width = width
        self._yaml = None
        self._only_first = only_first
        self._verbose = verbose

    @property
    def yaml(self):
        if self._yaml is None:
            self._yaml = y = ruamel.yaml.YAML(typ=['rt', 'string'])
            y.preserve_quotes = True
            y.width = 2**16
        return self._yaml

    def __call__(self, d):
        pass_nr = 0
        changed = [True]
        while changed[0]:
            changed[0] = False
            try:
                s = self.yaml.dumps(d)
            except AttributeError:
                print("use 'pip install ruamel.yaml.string' to install plugin that gives 'dumps' to string")
                sys.exit(1)
            if self._verbose > 1:
                print(s)
            too_long = set()
            max_ll = -1
            for line_nr, line in enumerate(s.splitlines()):
                if len(line) > self._width:
                    too_long.add(line_nr)
                if len(line) > max_ll:
                    max_ll = len(line)
            if self._verbose > 0:
                print(f'pass: {pass_nr}, lines: {sorted(too_long)}, longest: {max_ll}')
                sys.stdout.flush()
            new_d = self.yaml.load(s)
            self.change_to_block(new_d, too_long, changed, only_first=self._only_first)
            d = new_d
            pass_nr += 1
        return d, s

    @staticmethod
    def change_to_block(d, too_long, changed, only_first):
        if isinstance(d, dict):
            if d.fa.flow_style() and d.lc.line in too_long:
                d.fa.set_block_style()
                changed[0] = True
                return  # don't convert nested flow styles, might not be necessary
            # don't change keys if any value is changed
            for v in d.values():
                Blockify.change_to_block(v, too_long, changed, only_first)
                if only_first and changed[0]:
                    return
            if changed[0]:  # don't change keys if value has changed
                return
            for k in d:
                Blockify.change_to_block(k, too_long, changed, only_first)
                if only_first and changed[0]:
                    return
        if isinstance(d, (list, tuple)):
            if d.fa.flow_style() and d.lc.line in too_long:
                d.fa.set_block_style()
                changed[0] = True
                return  # don't convert nested flow styles, might not be necessary
            for elem in d:
                Blockify.change_to_block(elem, too_long, changed, only_first)
                if only_first and changed[0]:
                    return

blockify = Blockify(96, verbose=2)  # set verbose to 0, to suppress progress output

yaml = ruamel.yaml.YAML(typ=['rt', 'string'])
data = yaml.load(yaml_str)
blockified_data, string_output = blockify(data)
print('-'*32, 'result:', '-'*32)
print(string_output)  # string_output has no final newline
which gives:
movie: bladerunner
quote: {[Batty, Roy]: [I have seen things you people wouldn't believe., Attack ships on fire off the shoulder of Orion., I watched C-beams glitter in the dark near the Tannhäuser Gate.]}
pass: 0, lines: [1], longest: 186
movie: bladerunner
quote:
  [Batty, Roy]: [I have seen things you people wouldn't believe., Attack ships on fire off the shoulder of Orion., I watched C-beams glitter in the dark near the Tannhäuser Gate.]
pass: 1, lines: [2], longest: 179
movie: bladerunner
quote:
  [Batty, Roy]:
  - I have seen things you people wouldn't believe.
  - Attack ships on fire off the shoulder of Orion.
  - I watched C-beams glitter in the dark near the Tannhäuser Gate.
pass: 2, lines: [], longest: 67
-------------------------------- result: --------------------------------
movie: bladerunner
quote:
  [Batty, Roy]:
  - I have seen things you people wouldn't believe.
  - Attack ships on fire off the shoulder of Orion.
  - I watched C-beams glitter in the dark near the Tannhäuser Gate.
Please note that when using ruamel.yaml<0.18 the sequence [Batty, Roy] will never be in block style, because the tuple subclass CommentedKeySeq never gets a line number attached.

`bytes.fromhex` and `to_bytes` method in Raku?

I have a Python 3 function that combines two bytes objects, one created with the bytes.fromhex() method and the other with the to_bytes() method:
from datetime import datetime

def bytes_add() -> bytes:
    bytes_a = bytes.fromhex('6812')
    bytes_b = datetime.now().month.to_bytes(1, byteorder='little', signed=False)
    return bytes_a + bytes_b
Is it possible to write the same function in Raku? (If so, how do I control the byteorder and signed params?)
As for byteorder, say we convert the number 1024 to bytes in Python:
(1024).to_bytes(2, byteorder='little') # Output: b'\x00\x04', byte 00 is before byte 04
as a contrast, convert number 1024 to Buf or Blob in Raku:
buf16.new(1024) # Output: Buf[uint16]:0x<0400>, byte 00 is after byte 04
Is there any way to get Buf[uint16]:0x<0004> in the above example in Raku?
Update:
Inspired by codesections's answer, I tried to figure out a similar solution:
sub bytes_add() {
    my $bytes_a = pack("H*", '6812');
    my $bytes_b = buf16.new(DateTime.now.month);
    $bytes_a ~ $bytes_b;
}
But I still don't know how to control the byteorder.
Is it possible to write the same function in Raku?
Yes. I'm not 100% sure I understand the overall goal of the function you provided, but a literal/line-by-line translation is certainly possible. If you would like to elaborate on the goal, it may also be possible to achieve the same goal in an easier/more idiomatic way.
Here's the line-by-line translation:
sub bytes-add(--> Blob) {
    my $bytes-a = Blob(<68 12>);
    my $bytes-b = Blob(DateTime.now.month);
    Blob(|$bytes-a, |$bytes-b)
}
The output of bytes-add is printed by default using its hexadecimal representation (Blob:0x<44 0C 09>). If you'd like to print it more like Python prints its byte literals, you can do so with bytes-add».chr.raku, which prints as ("D", "\x[C]", "\t").
If so, how to control byteorder?
Because the code above constructs the Blob from a List, you can simply .reverse the list to use the opposite order.
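For explicit control instead, a small sketch (assuming a reasonably recent Rakudo, where Buf's write-uint16 accepts an Endian argument):

my $little = buf8.new(0 xx 2);
$little.write-uint16(0, 1024, LittleEndian);
say $little;  # Buf[uint8]:0x<00 04>

my $big = buf8.new(0 xx 2);
$big.write-uint16(0, 1024, BigEndian);
say $big;     # Buf[uint8]:0x<04 00>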

dask histogram from zarr file (a big zarr file)

So here's my question: I have a big 3-dim array which is 100GB in size as a zarr file (uncompressed, the array is more than twice that size). I have tried using the histogram from dask to calculate it, but I get an error saying that it can't do it because the file has tuples within tuples. I'm guessing that's the zarr file format rather than anything else?
any thoughts?
edit:
Yes, the bigger-computer thing wouldn't actually work...
I'm running a dask client on a single machine; it runs the calculation but just gets stuck somewhere.
I just tried the dask map function across the file, but when I plot it out I get something like this:
ValueError: setting an array element with a sequence.
Here's a version of the script:
import dask.array as da

def histo(img):
    return da.histogram(img, bins=255, range=[0, 255])

histo_1 = da.map_blocks(histo, fimg)
I am actually going to try to use it outside of the map function. I wonder whether, rather than the map function itself, the windowing from map_blocks actually causes the issue. Well, I'll let you know if it does or not...
edit 2
So I tried to remove the map blocks function as suggested and this was my result:
[in] h, bins = da.histogram(fused_crop, bins=255, range=[0, 255])
[in] bins
[out] array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.,
11., 12., 13., 14., 15., 16., 17., 18., 19., 20., 21.,
22., 23., 24., 25., 26., 27., 28., 29., 30., 31., 32.,
33., 34., 35., 36., 37., 38., 39., 40., 41., 42., 43.,
44., 45., 46., 47., 48., 49., 50., 51., 52., 53., 54.,
55., 56., 57., 58., 59., 60., 61., 62., 63., 64., 65.,
66., 67., 68., 69., 70., 71., 72., 73., 74., 75., 76.,
77., 78., 79., 80., 81., 82., 83., 84., 85., 86., 87.,
88., 89., 90., 91., 92., 93., 94., 95., 96., 97., 98.,
99., 100., 101., 102., 103., 104., 105., 106., 107., 108., 109.,
110., 111., 112., 113., 114., 115., 116., 117., 118., 119., 120.,
121., 122., 123., 124., 125., 126., 127., 128., 129., 130., 131.,
132., 133., 134., 135., 136., 137., 138., 139., 140., 141., 142.,
143., 144., 145., 146., 147., 148., 149., 150., 151., 152., 153.,
154., 155., 156., 157., 158., 159., 160., 161., 162., 163., 164.,
165., 166., 167., 168., 169., 170., 171., 172., 173., 174., 175.,
176., 177., 178., 179., 180., 181., 182., 183., 184., 185., 186.,
187., 188., 189., 190., 191., 192., 193., 194., 195., 196., 197.,
198., 199., 200., 201., 202., 203., 204., 205., 206., 207., 208.,
209., 210., 211., 212., 213., 214., 215., 216., 217., 218., 219.,
220., 221., 222., 223., 224., 225., 226., 227., 228., 229., 230.,
231., 232., 233., 234., 235., 236., 237., 238., 239., 240., 241.,
242., 243., 244., 245., 246., 247., 248., 249., 250., 251., 252.,
253., 254., 255.])
[in] h.compute
[out] <bound method DaskMethodsMixin.compute of dask.array<sum-aggregate, shape=(255,), dtype=int64, chunksize=(255,), chunktype=numpy.ndarray>>
I'm going to try it in another notebook and see if it still occurs.
edit 3
It's the strangest thing, but if I just declare the variable h, it comes out as one small element of the dask array?
edit
Strange: if I call the xarray.hist or the da.hist function, they both fall over. If I use skimage.exposure.histogram it works, but it appears that the zarr file is unpacked before the histogram is calculated, which is a bit of a problem...
Update 7th June 2020 (with a solution for not big but annoyingly medium data): see below for the answer.
You probably want to use dask's own histogram function for this rather than map_blocks. For the latter, dask expects the output of each call to be the same size as the input block, or a shape derived from the input block, rather than the one-dimensional fixed-size output of histogram.
h, bins = da.histogram(fused_crop, bins=255, range=[0, 255])
h.compute()
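A slightly fuller sketch of the same call (hedged: the da.from_zarr loading step and the file name are assumptions, standing in for however fused_crop was created). Note that compute is a method you must call; referring to h.compute without parentheses, as in the output further up, just yields the bound method instead of running the graph:

import dask.array as da

fused_crop = da.from_zarr("image.zarr")  # hypothetical file name
h, bins = da.histogram(fused_crop, bins=255, range=[0, 255])
counts = h.compute()                     # note the parentheses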
Update 7th June 2020 (with a solution for not big but annoyingly medium data):
So unfortunately I got a bit ill around this time and it took a while for me to feel better. Then the pandemic happened and I was on full childcare duty. I tried lots of different options, and ultimately what it looked like was the following:
1) If just using x.compute, the memory would very quickly fill up.
2) Using distributed would fill the hard drive with spill-to-disk and take hours, but would then hang and crash without doing anything, because (I'm guessing here, based on the graph and the dask API) it would create a sub-histogram array for every chunk... and those would all need to be merged at some point.
3) The chunking of my data was suboptimal, so the number of tasks was massive, but even when I improved the chunking I couldn't compute a histogram.
In the end I looked for a dynamic way of updating the histogram data, so I used Zarr to do it, by computing to it, since it allows concurrent reads and writes. As a reminder: my data is a zarr array in 3 dims x, y, z, 300GB uncompressed but about 100GB compressed. On my 4-year-old laptop with 16GB of RAM, the following worked (I should have said my data was 16-bit unsigned):
imgs = da.from_zarr(.....)
imgs2 = imgs.rechunk((a,b,c)) ## individual chunk dim per dim
h, bins = da.histogram(imgs2, bins = 255, range=[0, 65535]) # binning to 256
h_out = da.to_zarr(h, "histogram.zarr")
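As an aside, a sketch of how a progress bar can be attached to that last step when running on the local scheduler (this assumes dask.diagnostics.ProgressBar; the distributed client has its own progress reporting):

from dask.diagnostics import ProgressBar

with ProgressBar():
    da.to_zarr(h, "histogram.zarr")  # the histogram is computed as it is written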
I ran the progress bar alongside the process, and getting the histogram from file took:
[########################################] | 100% Completed | 18min 47.3s
Which I don't think is too bad for a 300GB array. Hopefully this helps someone else as well; thanks for the help earlier in the year, @mdurant.

Remove duplicate items from a hash in Ruby

I am using the Twitter Gem to access the Twitter API and I'd like to create a variable that only stores mentions that are unique, based on the text of the mention.
Right now, I'm storing all mentions like so: @allmentions = Twitter.mentions_timeline
This is an example of a mention returned for @allmentions[0]
=> #<Twitter::Tweet:0x007fbffb59ab88 @attrs={:created_at=>"Mon Dec 10 01:28:11 +0000 2012", :id=>277947788216639488, :id_str=>"277947788216639488", :text=>"@person hi", :source=>"web", :truncated=>false, :in_reply_to_status_id=>nil, :in_reply_to_status_id_str=>nil, :in_reply_to_user_id=>11739102, :in_reply_to_user_id_str=>"11739102", :in_reply_to_screen_name=>"person", :user=>{:id=>1000628702, :id_str=>"1000628702", :name=>"test account", :screen_name=>"testaccountso", :location=>"", :description=>"", :url=>nil, :entities=>{:description=>{:urls=>[]}}, :protected=>false, :followers_count=>0, :friends_count=>0, :listed_count=>0, :created_at=>"Mon Dec 10 01:27:39 +0000 2012", :favourites_count=>0, :utc_offset=>nil, :time_zone=>nil, :geo_enabled=>false, :verified=>false, :statuses_count=>1, :lang=>"en", :contributors_enabled=>false, :is_translator=>false, :profile_background_color=>"C0DEED", :profile_background_image_url=>"http://a0.twimg.com/images/themes/theme1/bg.png", :profile_background_image_url_https=>"https://si0.twimg.com/images/themes/theme1/bg.png", :profile_background_tile=>false, :profile_image_url=>"http://a0.twimg.com/sticky/default_profile_images/default_profile_3_normal.png", :profile_image_url_https=>"https://si0.twimg.com/sticky/default_profile_images/default_profile_3_normal.png", :profile_link_color=>"0084B4", :profile_sidebar_border_color=>"C0DEED", :profile_sidebar_fill_color=>"DDEEF6", :profile_text_color=>"333333", :profile_use_background_image=>true, :default_profile=>true, :default_profile_image=>true, :following=>nil, :follow_request_sent=>false, :notifications=>nil}, :geo=>nil, :coordinates=>nil, :place=>nil, :contributors=>nil, :retweet_count=>0, :entities=>{:hashtags=>[], :urls=>[], :user_mentions=>[{:screen_name=>"person", :name=>"Person", :id=>1173910, :id_str=>"1173910", :indices=>[0, 6]}]}, :favorited=>false, :retweeted=>false}>
I can access the text of the mention like so: @allmentions[0].text
Is there a built-in Ruby method (or an easy way) to let me store only the mentions that have a unique value in the text attribute?
Yes, you can call uniq with a block.
For example:
@allmentions.uniq { |m| m.text }
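As a self-contained illustration of how uniq with a block deduplicates by the block's return value (hypothetical stand-in data, since the real elements are Twitter::Tweet objects):

mentions = [
  { text: "@person hi" },
  { text: "@person hi" },      # same text, dropped
  { text: "@person hello" },
]
unique = mentions.uniq { |m| m[:text] }
p unique.map { |m| m[:text] }  # => ["@person hi", "@person hello"]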
To answer my own question, I did a bit of research, and it seems like this would work:
no_dupes = $allmentions.uniq { |h| h[:text] }

text processing for IPv4 dotted decimal notation conversion to /8 or /16 format

I have an input file that contains a list of IP addresses and the ip_counts (some parameter that I use internally). The file looks somewhat like this.
202.124.127.26 2135869
202.124.127.25 2111217
202.124.127.17 2058082
202.124.127.16 2014958
202.124.127.20 1949323
202.124.127.24 1933773
202.124.127.27 1932076
202.124.127.22 1886466
202.124.127.18 1882955
202.124.127.21 1803528
202.124.127.23 1786348
119.224.129.200 1776592
119.224.129.211 1639325
202.124.127.19 1479198
119.224.129.201 1145426
202.49.175.110 1133354
119.224.129.210 1119525
68.232.45.132 1085491
119.224.129.209 1015078
131.203.3.8 857951
202.162.73.4 817197
207.123.58.125 785326
202.7.6.18 762603
117.121.253.254 718022
74.125.237.120 710448
68.232.44.219 693002
202.162.73.2 671559
205.128.75.126 611301
119.161.91.17 604393
119.224.129.202 559930
8.27.241.126 528862
74.125.237.152 517516
8.254.9.254 514341
As you can see, the IP addresses themselves are unsorted. So I use the sort command on the file to sort the IP addresses as below:
cat address_count.txt | sort -t . -k 1,1n -k 2,2n -k 3,3n -k 4,4n > sorted_address.txt
which gives me an output with the IP addresses in sorted order. The partial output of that file is shown below.
4.23.63.126 15731
4.26.254.254 320705
4.27.8.254 25174
8.12.129.50 176141
8.12.223.125 11800
8.19.32.65 15854
8.19.240.53 11013
8.19.240.70 11915
8.19.240.72 31541
8.19.240.73 23304
8.20.213.28 96434
8.20.213.32 108191
8.20.213.34 170058
8.20.213.39 23512
8.20.213.41 10420
8.20.213.61 24809
8.26.195.253 28568
8.27.152.253 104446
8.27.233.125 115856
8.27.235.126 16102
8.27.235.254 25628
8.27.238.254 108485
8.27.240.125 169262
8.27.241.126 528862
8.27.241.252 197302
8.27.248.125 14926
8.254.9.254 514341
12.129.210.71 89663
15.192.45.21 20139
15.192.45.26 35265
15.193.0.148 10313
15.193.113.29 40318
15.201.49.136 14243
15.240.238.52 57163
17.250.248.95 28166
23.33.125.13 19179
23.33.125.37 17953
31.151.163.60 72709
38.99.42.37 192356
38.99.68.180 41251
38.99.68.181 10272
38.104.237.74 74012
38.108.112.103 37034
38.108.112.115 69698
38.108.112.121 92173
38.108.112.122 99230
38.112.63.238 39958
38.119.130.62 42159
46.4.28.22 19769
Now I want to parse the file given above and convert it to aaa.bbb.ccc.0/8 format and aaa.bbb.0.0/16 format, and I also want to count the number of occurrences of the IPs in each subnet. I want to do this using bash; I am open to using sed or awk. How do I achieve this?
For example
8.19.240.53 11013
8.19.240.70 11915
8.19.240.72 31541
8.19.240.73 23304
8.20.213.28 96434
8.20.213.32 108191
8.20.213.34 170058
8.20.213.39 23512
8.20.213.41 10420
8.20.213.61 24809
The above input portion should produce 8.19.240.0/8 and 8.20.213.0/8, and similarly for the /16 domains. I also want to count the occurrences of machines in each subnet.
For example, in the above output the first subnet should have the count 4 in the next column beside it. It should also add the already-displayed counts, i.e. (11013 + 11915 + 31541 + 23304), in another column.
8.19.240.0/8 4 (11013 + 11915 + 31541 + 23304)
8.20.213.0/8 6 (96434 + 108191 + 170058 + 23512 + 10420 + 24809)
It would be great if someone could suggest some way to achieve this.
The main problem here is that, without having the routing table from the individual moments the packets arrived, you have no idea what netblock they were originally in. Sure, you can put them in the classful blocks they would be in under a classful routing situation, but all that will give you is a nice presentation (and, admittedly, a shorter file).
Furthermore, your example looks a bit broken. You have a bunch of IP addresses in 8.0.0.0/8 and you are aggregating them into what look like /24 routes, presenting them with a /8 at the end.
Nonetheless, in awk you can use sub() to do text replacement (or you can use index() to find occurrences of ., or split() to split at the dots). From there it should be relatively easy to drop the last octet, append the string ".0/24", and use that as a key to update an IP-count and a hit-count dictionary; then drop the last two octets and the slash, replace them with ".0.0/16", and do the same (all arrays in awk are associative arrays, so essentially dicts). There is no need to sort in advance: when you loop through the result you'll get the keys in a random order, but on average there will be fewer of them, so sorting afterwards will be cheaper.
I seem to not have an awk at hand, so I cannot give you a code example.
This might work for you:
awk '{a=$1;sub(/\.[^.]*$/,"",a);ac[a]++;at[a]+=$2};END{for(x in ac)print x".0/8",ac[x],at[x]}' file
This prints the .0/8 addresses; to get the .0/16 output, duplicate the code, i.e. b=a;sub(/\.[^.]*$/,"",b);ba[b]++ etc., as sketched below.
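For example, a sketch that produces both granularities in one pass (untested; it keeps the question's ".0/8" label for the first form, even though, as noted above, those aggregates are really /24s):

awk '{
    a = $1; sub(/\.[^.]*$/, "", a)   # drop the last octet
    ac[a]++; at[a] += $2             # machine count and hit total per aaa.bbb.ccc
    b = a; sub(/\.[^.]*$/, "", b)    # drop the next octet as well
    bc[b]++; bt[b] += $2             # machine count and hit total per aaa.bbb
}
END {
    for (x in ac) print x ".0/8", ac[x], at[x]
    for (y in bc) print y ".0.0/16", bc[y], bt[y]
}' sorted_address.txt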
