I'm using Mahout 0.9 (installed on HDP 2.2) for topic discovery (Latent Dirichlet Allocation algorithm). I have my text file stored in the directory
inputraw and executed the following commands in order
command #1:
mahout seqdirectory -i inputraw -o output-directory -c UTF-8
command #2:
mahout seq2sparse -i output-directory -o output-vector-str -wt tf -ng 3 --maxDFPercent 40 -ow -nv
command #3:
mahout rowid -i output-vector-str/tf-vectors/ -o output-vector-int
command #4:
mahout cvb -i output-vector-int/matrix -o output-topics -k 1 -mt output-tmp -x 10 -dict output-vector-str/dictionary.file-0
After executing the second command, it creates, as expected, a bunch of subfolders and files under
output-vector-str (named df-count, dictionary.file-0, frequency.file-0, tf-vectors, tokenized-documents and wordcount). The sizes of these files all look OK considering the size of my input file, except that the file under `tf-vectors` is tiny, only 118 bytes.
Since
`tf-vectors` is the input to the third command, the third command also generates a very small file. Does anyone know:
why the file under the
`tf-vectors` folder is that small? There must be something wrong.
Starting from the first command, all the generated files have a strange encoding and are not human readable. Is this something expected?
Your answers are as follows:
what is the reason for the file under the tf-vectors folder being that small?
The vectors are small because you set the maximum document-frequency percentage (--maxDFPercent) to only 40%, meaning that only terms whose document frequency (the percentage of documents in which a term occurs) is 40% or less are taken into consideration when generating the vectors.
why do all the generated files have a strange encoding and aren't human readable?
There is a command in Mahout called mahout seqdumper which will come to your rescue for dumping files from the binary "sequential" (SequenceFile) format into a human-readable format.
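A hypothetical invocation (the part-r-00000 file name is an assumption; check which part files seq2sparse actually wrote under tf-vectors):

```shell
# Dump the binary SequenceFile to plain text for inspection
mahout seqdumper -i output-vector-str/tf-vectors/part-r-00000 -o tf-vectors-dump.txt
less tf-vectors-dump.txt
```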
Good Luck!!
I seem to be having trouble properly combining thousands of netCDF files (42,000+; about 3 GB for this particular folder/variable). The main variable that I want to combine has a structure of (6, 127, 118), i.e. (time, lat, lon).
I'm appending each file one by one, since the list of files is too long to pass at once.
I have tried:
for i in input_source/**/**/*.nc; do ncrcat -A -h append_output.nc $i append_output.nc ; done
but this method seems to be really slow (on the order of kB/s, and it seems to get slower as more files are appended) and is also giving a warning:
ncrcat: WARNING Intra-file non-monotonicity. Record coordinate "forecast_period" does not monotonically increase between (input file file1.nc record indices: 17, 18) (output file file1.nc record indices 17, 18) record coordinate values 6.000000, 1.000000
which basically means the variable "forecast_period" repeats 1-6 n times, where n = 42,000 files, i.e. [1,2,3,4,5,6,1,2,3,4,5,6,...].
Despite this warning I can still open the file and ncrcat does what it's supposed to; it is just slow, at least with this particular method.
I have also tried adding in the option:
--no_tmp_fl
but this gives an error:
ERROR: nco__open() unable to open file "append_output.nc"
full error attached below
If it helps, I'm using WSL with Ubuntu on Windows 10.
I'm new to bash and any comments would be much appreciated.
Either of these commands should work:
ncrcat --no_tmp_fl -h *.nc
or
ls input_source/**/**/*.nc | ncrcat --no_tmp_fl -h append_output.nc
Your original command is slow because you open and close the output file N times. These commands open it once, fill it up, then close it.
I would use CDO for this task. Given the huge number of files it is recommended to first sort them on time (assuming you want to merge them along the time axis). After that, you can use
cdo cat *.nc outfile
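A hedged sketch of the sorting step: if the file names sort lexicographically into chronological order (common when names embed a timestamp), you can make the ordering explicit before merging:

```shell
# Sort the inputs by name (~ time) and concatenate them in a single cdo call;
# the ** globs assume bash with globstar enabled, as in the original loop
cdo cat $(ls input_source/**/**/*.nc | sort) append_output.nc
```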
Getting an "Input/output error" when trying to work with files in a mounted HDFS NFS Gateway, despite having set dfs.namenode.accesstime.precision=3600000 in Ambari. For example, doing something like...
$ hdfs dfs -cat /hdfs/path/to/some/tsv/file | sed -e "s/$NULL_WITH_TAB/$TAB/g" | hadoop fs -put -f - /hdfs/path/to/some/tsv/file
$ echo -e "Lines containing null (expect zero): $(grep -c "\tnull\t" /nfs/hdfs/path/to/some/tsv/file)"
when trying to remove nulls from a TSV and then inspect that TSV for nulls via the NFS location throws the error, but I am seeing it in many other places (again, dfs.namenode.accesstime.precision=3600000 is already set). Does anyone have ideas why this may be happening, or debugging suggestions? Can anyone explain what exactly "access time" is in this context?
From discussion on the apache hadoop mailing list:
I think access time refers to the POSIX atime attribute for files, the “time of last access” as described here for instance (https://www.unixtutorial.org/atime-ctime-mtime-in-unix-filesystems). While HDFS keeps a correct modification time (mtime), which is important, easy and cheap, it only keeps a very low-resolution sense of last access time, which is less important, and expensive to monitor and record, as described here (https://issues.apache.org/jira/browse/HADOOP-1869) and here (https://superuser.com/questions/464290/why-is-cat-not-changing-the-access-time).
However, to have a conforming NFS api, you must present atime, and so the HDFS NFS implementation does. But first you have to turn it on. [...] many sites have been advised to turn it off entirely by setting it to zero, to improve HDFS overall performance. See for example here (https://community.hortonworks.com/articles/43861/scaling-the-hdfs-namenode-part-4-avoiding-performa.html, section "Don't let Reads become Writes"). So if your site has turned off atime in HDFS, you will need to turn it back on to fully enable NFS. Alternatively, you can maintain optimum efficiency by mounting NFS with the "noatime" option, as described in the document you reference.
[...] check under /var/log, e.g. with find /var/log -name '*nfs3*' -print
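For reference, mounting with "noatime" as suggested above might look like this (host and mount point are placeholders; vers=3, proto=tcp and nolock are the options usually documented for the HDFS NFS gateway):

```shell
# Mount the HDFS NFS gateway without client-side atime updates
sudo mount -t nfs -o vers=3,proto=tcp,nolock,noatime nfsgw-host:/ /nfs/hdfs
```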
I use a bash script to process a bunch of images for a timelapse movie. The method is called shutter drag, and I am creating a moving average over all images. The following script works fine:
#! /bin/bash
totnum=10000
seqnum=40
skip=1
num=$(((totnum-seqnum)/1))
i=1
j=1
while [ $i -le $num ]; do
echo $i
i1=$i
i2=$((i+1))
i3=$((i+2))
i4=$((i+3))
i5=$((i+4))
...
i37=$((i+36))
i38=$((i+37))
i39=$((i+38))
i40=$((i+39))
convert $i1.jpg $i2.jpg $i3.jpg $i4.jpg $i5.jpg ... \
$i37.jpg $i38.jpg $i39.jpg $i40.jpg \
-evaluate-sequence mean ~/timelapse/Images/Shutterdrag/$j.jpg
i=$((i+$skip))
j=$((j+1))
done
However, I noticed that this script takes a very long time to process a lot of images with a large averaging window (1 s per image). I guess this is caused by a lot of reading and writing in the background.
Is it possible to increase the speed of this script? For example, by storing the images in memory and, on each iteration, dropping the first image and loading only the new last one.
I discovered the mpr:{label} feature of ImageMagick, but I guess this is not the right approach, as the memory is cleared after the convert command?
Suggestion 1 - RAMdisk
If you want to put all your files on a RAMdisk before you start, it should help the I/O speed enormously.
So, to make a 1GB RAMdisk, use:
sudo mkdir /RAMdisk
sudo mount -t tmpfs -o size=1024m tmpfs /RAMdisk
Suggestion 2 - Use MPC format
So, assuming you have done the previous step, convert all your JPEGs to MPC format files on the RAMdisk. An MPC file can be DMA'ed straight into memory without your CPU needing to do costly JPEG decoding, because MPC is the same format that ImageMagick uses in memory, just on disk.
I would do that with GNU Parallel like this:
parallel -X mogrify -path /RAMdisk -format MPC ::: *.jpg
The -X passes as many files as possible to mogrify without creating loads of processes. The -path says where the output files must go. The -format MPC makes mogrify convert the input files to MPC format (Magick Pixel Cache) files which your subsequent convert commands in the loop can read by pure DMA rather than expensive JPEG decoding.
If you don't have, or don't like, GNU Parallel, just omit the leading parallel -X and the :::.
Suggestion 3 - Use GNU Parallel
You could also run @chepner's code in parallel...
for ...; do
echo convert ...
done | parallel
Essentially, I am echoing all the commands instead of running them and the list of echoed commands is then run by GNU Parallel. This could be especially useful if you cannot compile ImageMagick with OpenMP as Eric suggested.
You can play around with switches such as --eta after parallel to see how long it will take to finish, or --progress. Also, experiment with -j2 or -j4 depending on how big your machine is.
I did some benchmarks, just for fun. First, I made 250 JPEG images of random noise at 640x480, and ran chepner's code "as-is" - that took 2 minutes 27 seconds.
Then, I used the same set of images, but changed the loop to this:
for ((i=1, j=1; i <= num; i+=skip, j+=1)); do
echo convert "${files[@]:i-1:seqnum}" -evaluate-sequence mean ~/timelapse/Images/Shutterdrag/$j.jpg
done | parallel
The time went down to 35 seconds.
Then I put the loop back how it was, and changed all the input files to MPC instead of JPEG, the time went down to 36 seconds.
Finally, I used MPC format and GNU Parallel as above and the time dropped to 19 seconds.
I didn't use a RAMdisk as I am on a different OS from you (and have extremely fast NVME disks), but that should help you enormously too. You could write your output files to RAMdisk too, and also in MPC format.
Good luck and let us know how you get on please!
There is nothing you can do in bash to speed this up; everything except the actual IO that convert has to do is pretty trivial. However, you can simplify the script greatly:
#! /bin/bash
totnum=10000
seqnum=40
skip=1
num=$(((totnum-seqnum)/1))
# Could use files=(*.jpg), but they probably won't be sorted correctly
for ((i=1; i<=totnum; i++)); do
files+=($i.jpg)
done
for ((i=1, j=1; i <= num; i+=skip, j+=1)); do
    convert "${files[@]:i-1:seqnum}" -evaluate-sequence mean ~/timelapse/Images/Shutterdrag/$j.jpg
done
Storing the files in a RAM disk would certainly help, but that's beyond the scope of this site. (Of course, if you have enough RAM, the OS should probably be keeping a file in disk cache after it is read the first time so that subsequent reads are much faster without having to preload a RAM disk.)
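The windowing here is plain bash array slicing, which you can try standalone (hypothetical file names, no ImageMagick needed):

```shell
#!/bin/bash
# Ten hypothetical frame names, 1.jpg .. 10.jpg
files=()
for ((i=1; i<=10; i++)); do
    files+=($i.jpg)
done

seqnum=4   # frames averaged per output image
skip=2     # step between consecutive windows

# Print each window instead of calling convert
for ((i=1, j=1; i <= 10 - seqnum; i+=skip, j+=1)); do
    echo "window $j: ${files[@]:i-1:seqnum}"
done
# window 1: 1.jpg 2.jpg 3.jpg 4.jpg
# window 2: 3.jpg 4.jpg 5.jpg 6.jpg
# window 3: 5.jpg 6.jpg 7.jpg 8.jpg
```

Note the i-1 offset: bash arrays are zero-indexed, so slicing at i would silently skip the first frame.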
I am trying to classify binary data. In the data file, the class [0,1] is converted to [-1,1]. The data has 21 features, all categorical. I am using a neural network for training. The training command is:
vw -d train.vw --cache_file data --passes 5 -q sd -q ad -q do -q fd --binary -f model --nn 22
I create raw prediction file as:
vw -d test.vw -t -i neuralmodel -r raw.txt
And normal prediction file as:
vw -d test.vw -t -i neuralmodel -p out.txt
First five lines of raw file are:
0:-0.861075,-0.696812 1:-0.841357,-0.686527 2:0.796014,0.661809 3:1.06953,0.789289 4:-1.23823,-0.844951 5:0.886767,0.709793 6:2.02206,0.965555 7:-2.40753,-0.983917 8:-1.09056,-0.797075 9:1.22141,0.84007 10:2.69466,0.990912 11:2.64134,0.989894 12:-2.33309,-0.981359 13:-1.61462,-0.923839 14:1.54888,0.913601 15:3.26275,0.995055 16:2.17991,0.974762 17:0.750114,0.635229 18:2.91698,0.994164 19:1.15909,0.820746 20:-0.485593,-0.450708 21:2.00432,0.964333 -0.496912
0:-1.36519,-0.877588 1:-2.83699,-0.993155 2:-0.257558,-0.251996 3:-2.12969,-0.97213 4:-2.29878,-0.980048 5:2.70791,0.991148 6:1.31337,0.865131 7:-2.00127,-0.964116 8:-2.14167,-0.972782 9:2.50633,0.986782 10:-1.09253,-0.797788 11:2.29477,0.97989 12:-1.67385,-0.932057 13:-0.740598,-0.629493 14:0.829695,0.680313 15:3.31954,0.995055 16:3.44069,0.995055 17:2.48612,0.986241 18:1.32241,0.867388 19:1.97189,0.961987 20:1.19584,0.832381 21:1.65151,0.929067 -0.588528
0:0.908454,0.72039 1:-2.48134,-0.986108 2:-0.557337,-0.505996 3:-2.15072,-0.973263 4:-1.77706,-0.944375 5:0.202272,0.199557 6:2.37479,0.982839 7:-1.97478,-0.962201 8:-1.78124,-0.944825 9:1.94016,0.959547 10:-1.67845,-0.932657 11:2.54895,0.987855 12:-1.60502,-0.92242 13:-2.32369,-0.981008 14:1.59895,0.921511 15:2.02658,0.96586 16:2.55443,0.987987 17:3.47049,0.995055 18:1.92482,0.958313 19:1.47773,0.901044 20:-3.60913,-0.995055 21:3.56413,0.995055 -0.809399
0:-2.11677,-0.971411 1:-1.32759,-0.868656 2:2.59003,0.988807 3:-0.198721,-0.196146 4:-2.51631,-0.987041 5:0.258549,0.252956 6:1.60134,0.921871 7:-2.28731,-0.97959 8:-2.89953,-0.993958 9:-0.0972349,-0.0969177 10:3.1409,0.995055 11:1.62083,0.924746 12:-2.30097,-0.980134 13:-2.05674,-0.967824 14:1.6744,0.932135 15:1.85612,0.952319 16:2.7231,0.991412 17:1.97199,0.961995 18:3.47125,0.995055 19:0.603527,0.539567 20:1.25539,0.84979 21:2.15267,0.973368 -0.494474
0:-2.21583,-0.97649 1:-2.16823,-0.974171 2:2.00711,0.964528 3:-1.84079,-0.95087 4:-1.27159,-0.854227 5:-0.0841799,-0.0839635 6:2.24566,0.977836 7:-2.19458,-0.975482 8:-2.42779,-0.98455 9:0.39883,0.378965 10:1.32133,0.86712 11:1.87572,0.95411 12:-2.22585,-0.976951 13:-2.04512,-0.96708 14:1.52652,0.909827 15:1.98228,0.962755 16:2.37265,0.982766 17:1.73726,0.939908 18:2.315,0.980679 19:-0.08135,-0.081154 20:1.39248,0.883717 21:1.5889,0.919981 -0.389856
First five lines of (normal) prediction file are:
-0.496912
-0.588528
-0.809399
-0.494474
-0.389856
I have tallied this (normal) output with the raw output and noticed that the final float value on each of the five raw lines is the same as the corresponding prediction above.
I would like to understand the raw output as well as the normal output. Does each line holding 22 pairs of values have something to do with the 22 neurons? How do I interpret the output as [-1,1], and why is a sigmoid function needed to convert either of the above to probabilities? I will be grateful for help.
For binary classification, you should use a suitable loss function (--loss_function=logistic or --loss_function=hinge). The --binary switch just makes sure that the reported loss is the 0/1 loss (but you cannot optimize for 0/1 loss directly, the default loss function is --loss_function=squared).
I recommend trying the --nn as one of the last steps when tuning the VW parameters. Usually, it improves the results only a little bit and the optimal number of units in the hidden layer is quite small (--nn 1, --nn 2 or --nn 3). You can also try adding direct connections between the input and output layer with --inpass.
Note that --nn always uses tanh as the sigmoid for the hidden layer, and only one hidden layer is possible (it is hardcoded in nn.cc).
If you want to get probabilities (real number from [0,1]), use vw -d test.vw -t -i neuralmodel --link=logistic -p probabilities.txt. If you want the output to a be real number from [-1,1], use --link=glf1.
Without --link and --binary, the -p output is the internal prediction (in the range [-50, 50] when the logistic or hinge loss function is used).
As for the --nn --raw question, your guess is correct:
The 22 pairs of numbers correspond to the 22 neurons, and the last number is the final (internal) prediction. My guess is that each pair corresponds to the pre-activation value and the tanh output of each unit in the hidden layer.
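Incidentally, the second number of each pair looks like tanh of the first (clipped at about ±0.995 in the output above). A quick awk spot-check with pair "2:" copied from the first raw line (0.796014,0.661809):

```shell
#!/bin/sh
# tanh(x) from exponentials; compare against VW's reported second value
echo "0.796014 0.661809" | awk '{
    t = (exp($1) - exp(-$1)) / (exp($1) + exp(-$1))
    printf "tanh(%s) = %.4f, reported: %s\n", $1, t, $2
}'
# tanh(0.796014) = 0.6618, reported: 0.661809
```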
Here is my problem: I have a set of big gzipped log files, and the very first info in each line is a datetime text, e.g.: 2014-03-20 05:32:00.
I need to check which set of log files holds specific data.
To find the first record I simply do:
zgrep -m 1 '^20140320-04' 20140320-0{3,4}*gz
BUT HOW can I do the same with the last line, without processing the whole file as would be done with zcat (too heavy):
zcat foo.gz | tail -1
Additional info: those logs are created with the datetime of their initial record, so if I want to query logs at 14:00:00 I also have to search in files created BEFORE 14:00:00, as a file could be created at 13:50:00 and closed at 14:10:00.
The easiest solution would be to alter your log rotation to create smaller files.
The second easiest solution would be to use a compression tool that supports random access.
Projects like dictzip, BGZF, and csio each add sync flush points at various intervals within gzip-compressed data, which allow a program aware of that extra information to seek within the file. While such flush points are permitted by the standard, vanilla gzip does not add these markers, either by default or by option.
Files compressed by these random-access-friendly utilities are slightly larger (by perhaps 2-20%) due to the markers themselves, but fully support decompression with gzip or another utility that is unaware of these markers.
You can learn more at this question about random access in various compression formats.
There's also a "Blasted Bioinformatics" blog by Peter Cock with several posts on this topic, including:
BGZF - Blocked, Bigger & Better GZIP! – gzip with random access (like dictzip)
Random access to BZIP2? – An investigation (result: can't be done, though I do it below)
Random access to blocked XZ format (BXZF) – xz with improved random access support
Experiments with xz
xz (an LZMA compression format) actually has random access support on a per-block level, but you will only get a single block with the defaults.
File creation
xz can concatenate multiple archives together, in which case each archive gets its own block. GNU split can do this easily:
split -b 50M --filter 'xz -c' big.log > big.log.sp.xz
This tells split to break big.log into 50MB chunks (before compression) and run each one through xz -c, which outputs the compressed chunk to standard output. We then collect that standard output into a single file named big.log.sp.xz.
To do this without GNU, you'd need a loop:
split -b 50M big.log big.log-part
for p in big.log-part*; do xz -c $p; done > big.log.sp.xz
rm big.log-part*
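As a sanity check that concatenated .xz streams decompress back to the original file, here is a quick round trip (assumes xz and coreutils are installed; names mirror the example above):

```shell
#!/bin/sh
set -e
# Build a small test log and compress it as concatenated xz streams
seq 1 1000 > big.log
split -b 1K big.log big.log-part
for p in big.log-part*; do xz -c "$p"; done > big.log.sp.xz
rm big.log-part*

# Decompressing the concatenation reproduces the original byte-for-byte
unxz -c big.log.sp.xz | cmp - big.log && echo "round trip OK"
```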
Parsing
You can get the list of block offsets with xz --verbose --list FILE.xz. If you want the last block, you need its compressed size (column 5) plus 36 bytes for overhead (found by comparing the size to hd big.log.sp0.xz |grep 7zXZ). Fetch that block using tail -c and pipe that through xz. Since the above question wants the last line of the file, I then pipe that through tail -n1:
SIZE=$(xz --verbose --list big.log.sp.xz |awk 'END { print $5 + 36 }')
tail -c $SIZE big.log.sp.xz |unxz -c |tail -n1
Side note
Version 5.1.1 introduced support for the --block-size flag:
xz --block-size=50M big.log
However, I have not been able to extract a specific block since it doesn't include full headers between blocks. I suspect this is nontrivial to do from the command line.
Experiments with gzip
gzip also supports concatenation. I (briefly) tried mimicking this process for gzip, without any luck. gzip --verbose --list doesn't give enough information, and it appears the headers are too variable to find.
This would require adding sync flush points, and since their size varies on the size of the last buffer in the previous compression, that's too hard to do on the command line (use dictzip or another of the previously discussed tools).
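The concatenation itself does work, though: multiple gzip members written back to back decompress transparently as a single stream, which you can verify with any stock gzip:

```shell
#!/bin/sh
set -e
# Two independently compressed members, concatenated into one file
printf 'first\n'  | gzip -c >  both.gz
printf 'second\n' | gzip -c >> both.gz

# gzip decompresses them as one continuous stream
gzip -dc both.gz
# first
# second
```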
I did apt-get install dictzip and played with dictzip, but just a little. It doesn't work without arguments, creating a (massive!) .dz archive that neither dictunzip nor gunzip could understand.
Experiments with bzip2
bzip2 has headers we can find. This is still a bit messy, but it works.
Creation
This is just like the xz procedure above:
split -b 50M --filter 'bzip2 -c' big.log > big.log.sp.bz2
I should note that this is considerably slower than xz (48 min for bzip2 vs 17 min for xz vs 1 min for xz -0) as well as considerably larger (97M for bzip2 vs 25M for xz -0 vs 15M for xz), at least for my test log file.
Parsing
This is a little harder because we don't have the nice index. We have to guess at where to go, and we have to err on the side of scanning too much, but with a massive file, we'd still save I/O.
My guess for this test was 50000000 (out of the original 52428800, a pessimistic guess that isn't pessimistic enough for e.g. an H.264 movie.)
GUESS=50000000
LAST=$(tail -c$GUESS big.log.sp.bz2 \
|grep -abo 'BZh91AY&SY' |awk -F: 'END { print '$GUESS'-$1 }')
tail -c $LAST big.log.sp.bz2 |bunzip2 -c |tail -n1
This takes just the last 50 million bytes, finds the binary offset of the last bzip2 header within them, subtracts that offset from the guess size, and pulls that many bytes off the end of the file. Just that part is decompressed and piped into tail.
Because this has to query the compressed file twice and has an extra scan (the grep call seeking the header, which examines the whole guessed space), this is a suboptimal solution. See also the below section on how slow bzip2 really is.
Perspective
Given how fast xz is, it's easily the best bet; using its fastest option (xz -0) is quite fast to compress or decompress and creates a smaller file than gzip or bzip2 on the log file I was testing with. Other tests (as well as various sources online) suggest that xz -0 is preferable to bzip2 in all scenarios.
————— No Random Access —————— ——————— Random Access ———————
FORMAT SIZE RATIO WRITE READ SIZE RATIO WRITE SEEK
————————— ————————————————————————————— —————————————————————————————
(original) 7211M 1.0000 - 0:06 7211M 1.0000 - 0:00
bzip2 96M 0.0133 48:31 3:15 97M 0.0134 47:39 0:00
gzip 79M 0.0109 0:59 0:22
dictzip 605M 0.0839 1:36 (fail)
xz -0 25M 0.0034 1:14 0:12 25M 0.0035 1:08 0:00
xz 14M 0.0019 16:32 0:11 14M 0.0020 16:44 0:00
Timing tests were not comprehensive: I did not average anything, and disk caching was in use. Still, they look correct; there is a very small amount of overhead from split plus launching 145 compression instances rather than just one (and this may even be a net gain if it allows an otherwise non-multithreaded utility to use multiple threads).
Well, you can randomly access a gzipped file if you previously create an index for each file ...
I've developed a command line tool which creates indexes for gzip files which allow for very quick random access inside them:
https://github.com/circulosmeos/gztool
The tool has two options that may be of interest for you:
The -S option supervises a still-growing file and creates an index for it as it grows; this can be useful for gzipped rsyslog files, as it reduces the index-creation time to practically zero.
The -t option tails a gzip file; this way you can do: $ gztool -t foo.gz | tail -1
Please note that if the index doesn't exist yet, this will take the same time as a complete decompression, but as the index is reusable, subsequent searches will be greatly reduced in time!
This tool is based on the zran.c demonstration code from the original zlib, so there's no out-of-the-rules magic!