How to move and rename a file with random characters in shell? - bash

I have this file:
/root/.aria2/aria2.txt
and I want to move it to:
/var/spool/sms/outgoing/aria2_XXXXX
Note that XXXXX are random characters.
How do I do that using only the facilities exposed by OpenWrt (a GNU/Linux distribution for embedded devices) and the ash shell?

A simple way of generating a semi-random number in bash is to use the date +%N command or the shell-provided $RANDOM:
rn=$(date +%N) # Nanoseconds
rn=${rn:3:5} # to limit to 5 digits
Or, using $RANDOM, you need to check that you have sufficient digits for your purpose, since a single $RANDOM value ranges from 0 to 32767. If 5 is the number of digits you need:
rn=$RANDOM
while [ ${#rn} -lt 5 ]; do
rn="${rn}${RANDOM}"
done
rn=${rn:0:5}
To move while providing the random suffix:
mv /root/.aria2/aria2.txt /var/spool/sms/outgoing/aria2_${rn}
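Putting the two pieces together, a minimal sketch for the ash shell (assuming the device's date supports %N, which not every BusyBox build does) that also retries if the generated name is already taken:
dest=/var/spool/sms/outgoing
while :; do
  rn=$(date +%N)                      # nanoseconds; some BusyBox builds lack %N
  rn=${rn:3:5}
  [ -e "$dest/aria2_$rn" ] || break   # retry on the (rare) collision
done
mv /root/.aria2/aria2.txt "$dest/aria2_$rn"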

On systems with /dev/random you can obtain a string of random ASCII characters with something like
dd if=/dev/random count=1 |
tr -dc ' -~' |
dd bs=8 count=1
Set the bs= in the second dd invocation to the number of characters you want.
The probability of getting the same result twice is very low, but you have not told us what an acceptable range is. You should understand (or help us help you understand) what probability of a collision is acceptable in your scenario.
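As a hedged illustration of feeding this into the rename (restricted here to alphanumerics so the result is filename-safe; note that /dev/random may yield fewer printable bytes than requested, so in the worst case the suffix can come out shorter than 5 characters):
suffix=$(dd if=/dev/random count=1 2>/dev/null | tr -dc 'a-zA-Z0-9' | dd bs=5 count=1 2>/dev/null)
mv /root/.aria2/aria2.txt "/var/spool/sms/outgoing/aria2_${suffix}"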

Use the tempfile command
mv aria2.txt "$(tempfile -d "$dir" -p aria2)"
see man tempfile for the gory details.
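If tempfile is not available on the device, BusyBox builds usually ship mktemp instead, which takes a template with trailing X characters, creates the file and prints its name:
mv /root/.aria2/aria2.txt "$(mktemp /var/spool/sms/outgoing/aria2_XXXXXX)"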

Related

How to count items in an array with leading zeros in a bash script?

I have a loop that's going through files in a dir and renaming them.
for f in *
do
mv -n "$f" "$prefix_$(date -r "$f" +"%Y%m%d").${f#*.}"
done
I need to append the sequence number at the end, like
TEST_20200505_00001.NEF
TEST_20200505_00002.NEF
TEST_20200505_00155.NEF
How can I go about doing that?
Using only standard tools (bash, mv, date)
Add a variable that counts up with each loop iteration. Then use printf to add the leading zeros.
#! /usr/bin/env bash
c=1
for f in *; do
mv -n "$f" "${prefix}_$(date -r "$f" +%Y%m%d)_$(printf %05d "$c").${f#*.}"
(( c++ ))
done
Here we used a fixed width of five digits for the sequence number. If you only want to use as few digits as required you can use the following approach:
#! /usr/bin/env bash
files=(*)
while read i; do
  f="${files[10#$i]}"
  mv -n "$f" "${prefix}_$(date -r "$f" +%Y%m%d)_$i.${f#*.}"
done < <(seq -w 0 "$((${#files[@]}-1))")
This will use 1-digit numbers if there are only up to 9 files, 2 digits if there are only 10 to 99 files, 3 digits for 100 to 999 files, and so on.
In case you wonder about the 10#$i: Bash interprets numbers with leading zeros as octal numbers, that is 010 = 8 and 08 = error. To interpret the numbers generated by seq -w correctly we specify the base manually using the prefix 10#.
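A quick demonstration of the difference (the exact error text may vary between bash versions):
$ echo $(( 010 ))     # leading zero: interpreted as octal
8
$ echo $(( 10#010 ))  # base forced to 10
10
$ echo $(( 08 ))      # 8 is not a valid octal digit
bash: 08: value too great for base (error token is "08")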
Using a dedicated tool
For bulk renaming files, you don't have to re-invent the wheel. Install a tool like rename/perl-rename/prename and use something like
perl-rename '$N=sprintf("%05d",++$N); s/(.*)(\.[^.]*)/$1$N$2/' *
Here I skipped the date part, because you never showed the original names of your files. A simple string manipulation would probably be sufficient to convert the date to YYYYMMDD format.
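If you are unsure about the expression, most perl rename variants accept -n (no act) to preview the new names without renaming anything:
perl-rename -n '$N=sprintf("%05d",++$N); s/(.*)(\.[^.]*)/$1$N$2/' *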

Is there a fast way to read alternate bytes in dd

I'm trying to read out every other pair of bytes in a binary file using dd in a loop, but it is unusably slow.
I have a binary file on a BusyBox embedded device containing data in rgb565 format. Each pixel is 2 bytes and I'm trying to read out every other pixel to do very basic image scaling to reduce file size.
The overall size is 640x480 and I've been able to read every other "row" of pixels by looping dd with a 960 byte block size. But doing the same for every other "column" that remains by looping through with a 2 byte block size is ridiculously slow even on my local system.
i=1
while [[ $i -le 307200 ]]
do
dd bs=2 skip=$((i-1)) seek=$((i-1)) count=1 if=./tmpfile >> ./outfile 2>/dev/null
let i=i+2
done
While I get the output I expect, this method is unusable.
Is there some less obvious way to have dd quickly copy every other pair of bytes?
Sadly I don't have much control over what gets compiled in to BusyBox. I'm open to other possible methods but a dd/sh solution may be all I can use. For instance, one build has omitted head -c...
I appreciate all the feedback. I will check out each of the various suggestions and check back with results.
Skipping every other character is trivial for tools like sed or awk as long as you don't need to cope with newlines and null bytes. But Busybox's support for null bytes in sed and awk is poor enough that I don't think you can cope with them at all. It's possible to deal with newlines, but it's a giant pain because there are 16 different combinations to deal with depending on whether each position in a 4-byte block is a newline or not.
Since arbitrary binary data is a pain, let's translate to hexadecimal or octal! I'll draw some inspiration from the bin2hex and hex2bin scripts by Stéphane Chazelas. Since we don't care about the intermediate format, I'll use octal, which is a lot simpler to deal with because the final step uses printf, which only supports octal. Stéphane's hex2bin uses awk for the hexadecimal-to-octal conversion; an oct2bin can use sed. So in the end you need sh, od, sed and printf.
I don't think you can avoid printf: it's critical to outputting null bytes. While od is essential, most of its options aren't, so it should be possible to tweak this code to support a very stripped-down od with a bit more postprocessing.
od -An -v -t o1 -w4 |
sed 's/^ \([0-7]*\) \([0-7]*\).*/printf \\\\\1\\\\\2/' |
sh
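To see what each stage contributes, here is a small illustration on four known bytes (output shown approximately; the point is that only the first two bytes of every group of four survive):
printf 'ABCD' | od -An -v -t o1 -w4
# 101 102 103 104
printf 'ABCD' | od -An -v -t o1 -w4 |
  sed 's/^ \([0-7]*\) \([0-7]*\).*/printf \\\\\1\\\\\2/'
# printf \\101\\102
printf 'ABCD' | od -An -v -t o1 -w4 |
  sed 's/^ \([0-7]*\) \([0-7]*\).*/printf \\\\\1\\\\\2/' | sh
# AB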
The reason this is so fast compared to your dd-based approach is that BusyBox runs printf in the parent process, whereas dd requires its own process. Forking is slow. If I remember correctly, there's a compilation option which makes BusyBox fork for all utilities. In this case my approach will probably be as slow as yours. Here's an intermediate approach using dd which can't avoid the forks, but at least avoids opening and closing the file every time. It should be a little faster than yours.
i=$(($(wc -c <"$1") / 4))
exec <"$1"
dd ibs=2 count=1 conv=notrunc 2>/dev/null
while [ $i -gt 1 ]; do
dd ibs=2 count=1 skip=1 conv=notrunc 2>/dev/null
i=$((i - 1))
done
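Usage would be along these lines (the script name here is only illustrative); the reduced data goes to standard output:
sh halve_columns.sh ./tmpfile > ./outfile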
No idea if this will be faster or even possible with BusyBox, but it's a thought...
#!/bin/bash
# Empty result file
> result
exec 3< datafile
while true; do
# Read 2 bytes into file "short"
dd bs=2 count=1 <&3 > short 2> /dev/null
[ ! -s short ] && break
# Accumulate result file
cat short >> result
# Read two bytes and discard
dd bs=2 count=1 <&3 > short 2> /dev/null
[ ! -s short ] && break
done
Or this should be more efficient:
#!/bin/bash
exec 3< datafile
for ((i=0;i<76800;i++)) ; do
# Skip 2 bytes then read 2 bytes
dd bs=2 count=1 skip=1 <&3 2> /dev/null
done > result
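For reference, the 76800 above assumes the input is the 640x480 RGB565 file with alternate rows already dropped, so the arithmetic works out as:
echo $(( 640 * 480 / 2 ))        # 153600 pixels remain after dropping alternate rows
echo $(( 640 * 480 / 2 / 2 ))    # 76800 pixels kept, one dd call (4 bytes consumed) per pixel
echo $(( 76800 * 4 ))            # 307200 bytes read in total, the size of the input file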
Or, maybe you could use netcat or ssh to send the file to a sensible (more powerful) computer with proper tools to process it and return it. For example, if the remote computer had ImageMagick it could down-scale the image very simply.
Another option might be to use Lua, which has a reputation for being small, fast and well suited to embedded systems - see the Lua website, which also offers pre-built, downloadable binaries. It is also suggested on the Busybox website.
I have never written any Lua before, so there may be some inefficiencies but this seems to work pretty well and processes a 640x480 RGB565 image in a few milliseconds on my desktop.
-- scale.lua
-- Usage: lua scale.lua input.bin output.bin
-- Scale an image by skipping alternate lines and alternate columns
-- Set up width, height and bytes per pixel
w = 640
h = 480
bpp = 2
-- Open first argument for input, second for output
inp = assert(io.open(arg[1], "rb"))
out = assert(io.open(arg[2], "wb"))
-- Read image, one line at a time
for i = 0, h-1, 1 do
    -- Read a whole line
    line = inp:read(w*bpp)
    -- Only use every second line
    if (i % 2) == 0 then
        io.write("DEBUG: Processing row: ",i,"\n")
        -- Build up new, reduced line by picking substrings
        reduced=""
        for p = 1, w*bpp, bpp*2 do
            reduced = reduced .. string.sub(line,p,p+bpp-1)
        end
        io.write("DEBUG: New line length in bytes: ",#reduced,"\n")
        out:write(reduced)
    end
end
assert(out:close())
I created a greyscale test image with ImageMagick as follows:
magick -depth 16 -size 640x480 gradient: gray:image.bin
Then I ran the above Lua script with:
lua scale.lua image.bin smaller.bin
Then I made a JPEG I could view for testing with:
magick -depth 16 -size 320x240 gray:smaller.bin smaller.jpg

generating every possible letter and number combination 8 and 63 characters long. in bash

How would I generate every possible letter and number combination into a word list, something kind of like "seq -w 0000000000-9999999999 > word-list.txt" or like "echo {a..z}{0..9}{Z..A}", but I need to include letters and length options. Any help? As side info, this will be run on a GTX 980 so it won't be too slow, but I am worried about the storage issue. If you have any solutions please let me know.
file1.sh:
#!/bin/bash
echo '#!/bin/bash' > file2.sh
for i in $(seq "$1" "$2")
do
  echo -n 'echo ' >> file2.sh
  for x in $(seq 1 "$i")
  do
    echo -n \{a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z,\
A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,W,X,Y,Z,\
0,1,2,3,4,5,6,7,8,9\} >> file2.sh
  done
  echo >> file2.sh
done
I don't think it's a good idea, but this ought to generate a file which when executed will generate all possible alphanumeric sequences with lengths between the first and second arguments inclusive. Please don't actually try it for 8 through 64, I'm not sure what will happen but there's no way it will work. Sequences of the same length will be space separated, sequences of different lengths will be separated by newlines. You can send it through tr if you want to replace the spaces with newlines. I haven't tested it and I clearly didn't bother with silly things like input validation, but it seems about right.
If you're doing brute force testing, there should be no need to save every combination somewhere, given how easy and (comparatively) quick it is to generate on the fly. You might also consider looking at one of the many large password files that are available on the internet.
P.S. You should try doing the math to figure out approximately how much you would have to spend on 1TB hard drives to store the file you wanted.
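To put a rough number on it, for length 8 alone (62 possible characters per position, 9 bytes per line including the newline):
echo '62^8 * 9' | bc
# 1965060950264064  -> roughly 2 petabytes, before even starting on lengths 9 to 63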

converting number to bitfield string in bash

What might be the most concise way in bash to convert a number into a bitfield character string like 1101?
In effect I am trying to do the opposite of
echo $[2#1101]
Why: I need to send a parameter to a program that takes bitfields in the form of a full string like "0011010110" but often only need to enable one or few bits as in:
SUPPRESSbits=$[1<<16] runscript.sh # OR
SUPPRESSbits=$[(1<<3) + (1<<9)] runscript.sh # much more readable when I know what bits 3 and 9 toggle in the program
runscript.sh then sees SUPPRESSbits=65536 in its environment rather than SUPPRESSbits="1000000000000000" and fails with a parse error.
The easy way:
$ dc <<<2o123p
1111011
$ bc <<<'obase=2; 123'
1111011
I'm not sure about doing this in bash alone, but you can always use perl:
a=123; b=$(perl -e 'printf "%b", "'$a'"'); echo $b
1111011
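If you would rather avoid perl, a pure-bash loop can do the same conversion with no external commands; a minimal sketch (the function name tobits is only illustrative):
tobits() {
  local n=$1 bits=
  while [ "$n" -gt 0 ]; do
    bits=$(( n % 2 ))$bits
    n=$(( n / 2 ))
  done
  printf '%s\n' "${bits:-0}"
}
tobits 123   # prints 1111011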

how to remove decimal from a variable?

How do I remove the decimal place in a shell script? I am multiplying a size in MB by 1024*1024 to get the value in bytes, and I need to drop the decimal part.
Example:
196.3*1024*1024
205835468.8
expected output
205835468
(You did not mention what shell you're using; this answer assumes Bash).
You can remove the decimal values using ${VAR%.*}. For example:
[me#home]$ X=$(echo "196.3 * 1024 * 1024" | bc)
[me#home]$ echo $X
205835468.8
[me#home]$ echo ${X%.*}
205835468
Note that this truncates the value rather than rounds it. If you wish to round it, use printf as shown in Roman's answer.
The ${variable%pattern} syntax deletes the shortest match of pattern starting from the back of variable. For more information, read http://tldp.org/LDP/abs/html/string-manipulation.html
Use printf:
printf %.0f $float
This will perform rounding. So if float is 1.8, it'll give you 2.
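For example (note that exact halfway cases typically round to the nearest even value, depending on the C library):
$ printf '%.0f\n' 1.8 2.5 2.6
2
2
3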
