I'm trying to write a simple shell script on Linux that creates directories with random names.
The names must consist of today's date followed by a random string, as in this example:
2018-02-22y2Fdv9zzLVLupkl9El0dWalJAGTROLxE
This is the shell script
#!/bin/bash
# the date
DATAOGGI= echo -n $(date +"%Y-%m-%d")
# random string
RANDOM_STRING=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
# the dir
NEW_DIR=$(echo -n ${DATAOGGI}${RANDOM_STRING})
echo $NEW_DIR
mkdir $NEW_DIR
Unfortunately, even though the variable NEW_DIR looks correct:
echo $NEW_DIR -> 2018-02-22y2Fdv9zzLVLupkl9El0dWalJAGTROLxE
the name of the created directory is:
y2Fdv9zzLVLupkl9El0dWalJAGTROLxE
The problem is the first assignment: DATAOGGI= echo -n $(date +"%Y-%m-%d") sets DATAOGGI to an empty string only in the environment of the echo command, so the variable stays empty; the echo itself prints the date to stdout without a trailing newline, which is why the later echo $NEW_DIR appears to show the full name even though NEW_DIR only holds the random string. Try just:
#!/bin/bash
DATAOGGI=$(date +"%Y-%m-%d")
RANDOM_STRING=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
mkdir "${DATAOGGI}${RANDOM_STRING}"
Apart from the fact that it is not necessary in this example, echo -n AFAIK has very inconsistent behavior across shells, and it is advised to use printf instead.
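If you do want to build the name in a variable first, here is a minimal sketch that avoids echo -n entirely (same logic as the script above, just with printf):
#!/bin/bash
DATAOGGI=$(date +"%Y-%m-%d")
RANDOM_STRING=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 32 | head -n 1)
NEW_DIR="${DATAOGGI}${RANDOM_STRING}"
printf '%s\n' "$NEW_DIR"    # print the name with an explicit newline
mkdir "$NEW_DIR"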
I got strange behavior when running a script with arguments that contains a command substitution. I would like to understand why this behavior is happening. The script is:
#!/bin/bash
# MAIL=$1
# USER=$2
PASSWORD=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w ${1:-20} | head -n 1);
echo "$PASSWORD"
Then when I run ./test.sh mail user, I get the error:
fold: invalid number of columns: ‘mail’
and the Password is not generated.
If I don't pass an argument or I don't generate the password, it works fine.
Update (for understanding the behavior)
I think I've found out what is happening:
When running a script with two arguments, $1 and $2 hold the passed values. Example:
./test.sh arg1 arg2 gives $1 -> arg1 and $2 -> arg2
When using a pipe inside a script, the original arguments are still passed, and thus if you have two arguments as input you will have the piped output inserted into the third place, $3.
$1 -> arg1
$2 -> arg2
$3 -> piped output
So a working solution would be:
PASSWORD=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w ${3:-20} | head -n 1);
but if you vary the input arguments, it will not work. Therefore the best solution is what @KamilCuk suggested:
PASSWORD=$(< /dev/urandom tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1);
If you do not wish to pass the first script argument to fold, then do not use $1 in it.
PASSWORD=$(< /dev/urandom tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1);
# ^^ - pass the number, not $1
echo "$PASSWORD"
..poky/build$ for SUBPATH in $(bitbake -e alsa-lib | grep -P -e '(?<=^)FILES_alsa-lib(?==)' | cut -d= -f2 | tr -d \") ; do ls ./tmp-glibc/work/armv7a-vfp-neon-oe-linux-gnueabi/alsa-lib/1.0.29-r0/package$SUBPATH 2>&1 ; done | grep -e "No such file or directory" | wc -l
2855
..poky/build$ for SUBPATH in $(bitbake -e alsa-lib | grep -P -e '(?<=^)FILES_alsa-lib(?==)' | cut -d= -f2 | tr -d \") ; do ls ./tmp-glibc/work/armv7a-vfp-neon-oe-linux-gnueabi/alsa-lib/1.0.29-r0/package$SUBPATH 2>&1 ; done | grep -v -e "No such file or directory" | wc -l
15
Here is one of all those No such file or directory errors:
ls: cannot access ./tmp-glibc/work/armv7a-vfp-neon-oe-linux-gnueabi/alsa-lib/1.0.29-r0/package/usr/lib/libicalss.so.1.0.0: No such file or directory
where
..poky/build$ bitbake -e alsa-lib | grep -P -e '(?<=^)FILES_alsa-lib(?==)' | cut -d= -f2 | tr -d \"
/usr/bin/* /usr/sbin/* /usr/lib/alsa-lib/* /usr/lib/lib*.so.* /etc /com /var /bin/* /sbin/* /lib/*.so.* /lib/udev/rules.d /usr/lib/udev/rules.d /usr/share/alsa-lib /usr/lib/alsa-lib/* /usr/share/pixmaps /usr/share/applications /usr/share/idl /usr/share/omf /usr/share/sounds /usr/lib/bonobo/servers /usr/lib/alsa-lib/smixer/*.so
and
poky/build$ echo $SHELL
/bin/bash
Apparently Bash file name expansion finds 2855 items under the identified sub-paths that the called ls command can't access.
Actually, in every iteration, instead of ls ... I need to run find with the search root set to ./tmp-glibc/work/armv7a-vfp-neon-oe-linux-gnueabi/alsa-lib/1.0.29-r0/package$SUBPATH and the -name argument given a few times (logical OR) with some patterns.
Where is my mistake?
Is it that file name expansion takes place in the for loop instead of when the ls command is invoked (as the programmer intended)?
The following alternative is what I found, within my limited expertise and time resources:
cd ./tmp-glibc/work/armv7a-vfp-neon-oe-linux-gnueabi/alsa-lib/1.0.29-r0/package && echo "/usr/bin/* /usr/sbin/* /usr/lib/alsa-lib/* /usr/lib/lib*.so.* /etc /com /var /bin/* /sbin/* /lib/*.so.* /lib/udev/rules.d /usr/lib/udev/rules.d /usr/share/alsa-lib /usr/lib/alsa-lib/* /usr/share/pixmaps /usr/share/applications /usr/share/idl /usr/share/omf /usr/share/sounds /usr/lib/bonobo/servers" | sed -r 's/(^\/)/.\//g' | sed -r 's/( \/)/ .\//g' | ls -Ralh $(awk '{print $0}') ; cd -
The glue for feeding the long string of sub-paths from the building block presented earlier into this command pipe is out of scope for this question.
If possible, please review. Thanks. Does this qualify as this question's answer?
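As to the question above: yes, the globs are expanded when the for list is built, and because the patterns are absolute (/usr/bin/* and so on) they expand against the host's root filesystem rather than the package directory. A somewhat tidier sketch that expands them inside the package directory instead (the -name patterns here are placeholders, not the real ones needed):
PKGDIR=./tmp-glibc/work/armv7a-vfp-neon-oe-linux-gnueabi/alsa-lib/1.0.29-r0/package
FILES=$(bitbake -e alsa-lib | grep -P -e '(?<=^)FILES_alsa-lib(?==)' | cut -d= -f2 | tr -d \")
(
  cd "$PKGDIR" || exit 1
  # Turn "/usr/bin/*" into "./usr/bin/*" so the globs expand here,
  # relative to $PKGDIR, not against the host's /.
  for SUBPATH in $(printf '%s\n' "$FILES" | sed 's| /| ./|g; s|^/|./|'); do
    find "$SUBPATH" \( -name '*.so' -o -name '*.conf' \) 2>/dev/null
  done
)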
Hi, I have the following batch script where I submit each file for separate processing, as follows:
for file in ../Positive/*.txt_rn; do
bsub <<EOF
#BSUB -L /bin/bash
#BSUB -W 150:00
#BSUB -M 10000
#BSUB -n 3
#BSUB -e /somefolder/errors/%J.err
#BSUB -o /somefolder/errors/%J.out
while read line; do
name=`cat \$line | awk '{print $1":"$2"-"$3}'`
four=`cat \$line | awk '{print $4}' | cut -d\: -f4`
fasta=\$name".fa"
op=\$name".rs"
echo \$name | xargs samtools faidx /somefolder/rn4/Rattus_norvegicus/UCSC/rn4/Sequence/WholeGenomeFasta/genome.fa > \$fasta
Process -F \$fasta -M "list_"\$four".txt" -p 0.003 | awk '(\$5 >= 0.67)' > \$op
if [ -s "\$op" ]
then
cat "\$line" >> ../Positive_Strand/$file".cons"
fi
rm \$lne
rm \$op
rm \$fasta
done < $file
EOF
done
I am somehow unable to store the values of the columns from the line (which is in the $line variable) into the $name and $four variables, and hence unable to carry on with the further processing. Also, any suggestions for a better version of this code would be welcome.
If you change EOF to 'EOF', you will disable shell interpretation inside the here-document entirely. Your immediate problem is that your back-ticks (`) are not escaped.
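A quick illustration of the difference (plain here-document behavior, nothing specific to bsub):
greeting=hello
# Unquoted delimiter: the shell expands $greeting before cat sees the text.
cat <<EOF
value: $greeting
EOF
# prints: value: hello

# Quoted delimiter: the body is passed through literally, with no expansion.
cat <<'EOF'
value: $greeting
EOF
# prints: value: $greeting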
I've fixed your indentation and cleaned up some of your code. Note that the syntax highlighting here doesn't understand cat <<'EOF'. If you paste that into vim with highlighting enabled, you'll see that block is all the same color since it's just a string.
bsub_helper() {
  cat <<'EOF'
#BSUB -L /bin/bash
#BSUB -W 150:00
#BSUB -M 10000
#BSUB -n 3
#BSUB -e /somefolder/errors/%J.err
#BSUB -o /somefolder/errors/%J.out
while read line; do
  name=`cat $line | awk '{print $1":"$2"-"$3}'`
  four=`cat $line | awk '{print $4}' | cut -d: -f4`
  fasta="$name.fa"
  op="$name.rs"
  genome="/somefolder/rn4/Rattus_norvegicus/UCSC/rn4/Sequence/WholeGenomeFasta/genome.fa"
  echo $name | xargs samtools faidx "$genome" > "$fasta"
  Process -F "$fasta" -M "list_$four.txt" -p 0.003 | awk '($5 >= 0.67)' > "$op"
  if [ -s "$op" ]
  then
    cat "$line" >> "../Positive_Strand/$file.cons"
  fi
  rm "$lne" "$op" "$fasta"
EOF
  echo " done < \"$1\""
}

for file in ../Positive/*.txt_rn; do
  bsub_helper "$file" | bsub
done
I created a helper function because I needed to get the input in two commands. I am assuming that $file is the only variable in that block that you want interpreted. I also surrounded that variable (among others) with quotes so that the code can support file names with spaces in them. The final line of the helper has nested double quotes for this reason.
I left your echo $name | xargs … line alone because it's so odd. Without quotes around $name, xargs will take each whitespace-separated entry as its own file. With quotes, xargs will only supply one (likely invalid) file name to samtools.
If $name is a single file, try:
samtools faidx "$genome" "$name" > "$fasta"
If $name is multiple files and none of them have spaces, try:
samtools faidx "$genome" $name > "$fasta"
The only reason to use xargs here would be if you have too much content for one command line, but if you're running echo $name | xargs then you'll run into the same problem.
Ok, here's the problem.
I have a plaintext list of IP addresses that I'm blocking on my servers, growing more and more unwieldy every day (added 3000+ entries today alone).
It's already been sorted for duplicates so that's not a problem. What I'd like to do is write a script to go through it and consolidate the entries a bit better for mass blocking.
For example, take this:
2.132.35.104
2.132.79.240
2.132.99.87
2.132.236.34
2.132.245.30
And turn it into this:
2.132.0.0/16
Any suggestions on how to code that in a bash script?
UPDATE: I've worked out part-way how to do what I'm needing. Converting it to /24 is easy, as follows:
cat /usr/local/blocks/blocks.txt | while read line; do
  oc1=`echo "$line" | cut -d '.' -f 1`
  oc2=`echo "$line" | cut -d '.' -f 2`
  oc3=`echo "$line" | cut -d '.' -f 3`
  oc4=`echo "$line" | cut -d '.' -f 4`
  echo "$oc1.$oc2.$oc3.0/24" >> twentyfour.srt
done
sort -u twentyfour.srt > twentyfour.txt
rm -f twentyfour.srt
ori=`cat /usr/local/blocks/blocks.txt | wc -l`
new=`cat twentyfour.txt | wc -l`
echo "$ori"
echo "$new"
That reduced it down from 4,452 entries to 4,148 entries.
Instead of having:
109.86.9.93
109.86.26.77
109.86.55.225
109.86.70.224
109.86.87.199
109.86.89.202
109.86.95.248
109.86.100.19
109.86.110.43
109.86.145.216
109.86.152.86
109.86.155.238
109.86.156.54
109.86.187.91
109.86.228.86
109.86.234.51
109.86.239.61
I now have:
109.86.100.0/24
109.86.110.0/24
109.86.145.0/24
109.86.152.0/24
109.86.155.0/24
109.86.156.0/24
109.86.187.0/24
109.86.228.0/24
109.86.234.0/24
109.86.239.0/24
109.86.26.0/24
109.86.55.0/24
109.86.70.0/24
109.86.87.0/24
109.86.89.0/24
109.86.9.0/24
109.86.95.0/24
All well and good. BUT, there are 17 entries from the 109.86 range. In a case where the first two octets match more than, say, 5 entries at /24, I'd like to reduce those to a single /16.
That's where I'm stuck.
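One way to express that rule (just a sketch, building on the twentyfour.txt produced above; the threshold of 5 and the output file name consolidated.txt are only illustrative):
awk -F. '{ count[$1"."$2]++; line[NR] = $0 }
END {
  for (i = 1; i <= NR; i++) {
    split(line[i], o, ".")
    key = o[1] "." o[2]
    if (count[key] > 5)
      six[key] = 1          # enough /24s here: collapse this prefix into a /16
    else
      print line[i]         # keep the original /24
  }
  for (k in six) print k ".0.0/16"
}' twentyfour.txt | sort -u > consolidated.txt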
UPDATE 2:
For Steve: Here's the block list for today. And here's the result so far. Apparently it's not removing the near-duplicate entries from twentyfour that are in sixteen.
I wish I could tell you this is a simple filter. However, all of the 2.0.0.0/8 network is registered to RIPE NCC. There are just way too many different ranges of blocked IP addresses; it's easier to narrow down the scope of visitors you do want versus those you don't want.
You could also use various tools to block attacks automatically.
Here is a map to identify which is which: https://www.iana.org/numbers
Here's a script I just made for you, so you can create the major block lists for each of the primary registries: Afrinic, Lacnic, Apnic, Ripe, and Arin.
create_tables_by_registry.sh
Just run this script, then run the generated registry .sh files (e.g., ripe.sh).
#!/bin/bash
# Author: Steve Kline
# Date: 03-04-2014
# Designed and tested to run properly on CentOS 6.5

# Grab updated IANA address space assignments, only if there is a newer version
wget -N https://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.txt
assigned=ipv4-address-space.txt

arrayregistry=( afrinic apnic arin lacnic ripe )
for registry in "${arrayregistry[@]}"
do
  # Clean up the ipv4-address-space.txt file and keep usable IPs
  grep "$registry" $assigned | sed 's/\/8/\.0\.0\.0\/8/g'| colrm 15 > $registry-tmp1.txt
  ip=($(cat $registry-tmp1.txt))
  echo "#!/bin/bash" > $registry.sh
  for ip in "${ip[@]}"
  do
    echo $ip | sed -e 's/" "//g' > $registry-tmp2.txt
    # INSERT OR MODIFY YOUR COMPATIBLE FIREWALL RULES HERE
    # This section creates the country block.
    echo "iptables -A INPUT -s $ip -j DROP" >> $registry.sh
    chmod +x $registry.sh
  done
  rm $registry-tmp1.txt -f
  rm $registry-tmp2.txt -f
done
Ok! Well I'm back, a little insane here and a little nutty there... I think I helped figure this out for you. I'm sure you can piece together a modification to better fit your needs.
#MODIFY FOR YOUR LIST OF IP ADDRESSES
BADIPS=block.ip
twentyfour=./twentyfour.ips #temp file for all IPs converted to twentyfour net ids
sixteen=./sixteen.ips #temp file for sixteen bit
twentyfourlst1=./twentyfour1.txt #temp file for 24 bit IDs
twentyfourlst2=./twentyfour2.txt #temp file for 24 bit IDs filtered by 16 bit IDs that match
sixteenlst=./sixteen.txt #temp file for parsed sixteenbit
#MODIFY FOR YOUR OUTPUT OF CIDR ADDRESSES
finalfile=./blockips.list #Final file post-merge
cat $BADIPS | while read line; do
  oc1=`echo "$line" | cut -d '.' -f 1`
  oc2=`echo "$line" | cut -d '.' -f 2`
  oc3=`echo "$line" | cut -d '.' -f 3`
  oc4=`echo "$line" | cut -d '.' -f 4`
  echo "$oc1.$oc2.$oc3.0/24" >> $twentyfour
  echo "$oc1.$oc2.0.0/16" >> $sixteen
done
awk '{i=1;while(i <= NF){a[$(i++)]++}}END{for(i in a){if(a[i]>4){print i,a[i]}}}' $sixteen | sed 's/ [0-9]\| [0-9][0-9]\| [0-9][0-9][0-9]//g' > $sixteenlst
sort -u $twentyfour > twentyfour.txt
# THIS FINDS NEAR DUPLICATES MATCHING FIRST TWO OCTETS
cat $sixteenlst | while read line; do
  oc1=`echo "$line" | cut -d '.' -f 1`
  oc2=`echo "$line" | cut -d '.' -f 2`
  oc3=`echo "$line" | cut -d '.' -f 3`
  oc4=`echo "$line" | cut -d '.' -f 4`
  grep "\b$oc1.$oc2\b" twentyfour.txt >> duplicates.txt
done
#THIS REMOVES THE NEAR DUPLICATES FROM THE TWENTYFOUR FILE
fgrep -vw -f duplicates.txt twentyfour.txt > twentyfourfinal.txt
#THIS MERGES BOTH RESULTS
cat twentyfourfinal.txt $sixteenlst > $finalfile
sort -u $finalfile
ori=`cat $BADIPS | wc -l`
new=`cat $finalfile | wc -l`
echo "$ori"
echo "$new"
#LAST MIN CLEANUP
rm -f $twentyfour $twentyfourlst $sixteen $sixteenlst duplicates.txt twentyfourfinal.txt
Going back to fix: I noted a problem... the original version was unsuccessful:
grep "$oc1.$oc2" twentyfour.txt > duplicates.txt
For example, the old script had bad results with this test IP range; the updated version above now does exactly what is intended: it matches the octets exactly, and not something merely similar.
192.168.1.1
192.168.2.50
192.168.5.23
192.168.14.10
192.168.10.5
192.168.24.25
192.165.20.10
10.192.168.30
5.76.10.20
5.76.20.30
5.76.250.10
5.76.34.10
5.76.50.30
95.76.30.1 - Old script matched this to 5.76
20.20.5.5
20.20.10.10
20.20.16.50
20.20.205.20
20.20.60.20
205.20.16.20 - not a problem
20.205.150.150 - Old script matched this to 20.20
220.20.16.0 - Also failed without adding -w parameter to the last grep to only match exact strings.
I'm trying to do the following in a bash script:
com=`ssh host "ls -lh"`
echo $com
It works, but the echo will break the output (instead of getting all lines in a column, I get them all in a row).
If I do: ssh host ls -lh in the CLI it will give me the correct output and layout.
How can I preserve the layout when echoing a variable?
You need:
echo "$com"
The quotes make the shell not break the value up into "words", but pass it as a single argument to echo.
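A quick way to see the difference (any multi-line value will do):
com=$(printf 'line1\nline2\n')
echo $com      # unquoted: word splitting collapses the newlines -> line1 line2
echo "$com"    # quoted: the value is passed as one argument, so both lines print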
Put double quotes around $com:
com=`ssh host "ls -lh"`
printf "%s" $com | tr -dc '\n' | wc -c # count newlines
printf "%s" "$com" | tr -dc '\n' | wc -c
echo "$com"