To get \n instead of n in echo -e command in shell script

I am trying to get the output for the echo -e command as shown below
Command used:
echo -e "cd \${2}\nfilesModifiedBetweenDates=\$(find . -type f -exec ls -l --time-style=full-iso {} \; | awk '{print \$6,\$NF}' | awk '{gsub(/-/,\"\",\$1);print}' | awk '\$1>= '$fromDate' && \$1<= '$toDate' {print \$2}' | tr \""\n"\" \""\;"\")\nIFS="\;" read -ra fileModifiedArray <<< "\$filesModifiedBetweenDates"\nfor fileModified in \${fileModifiedArray[#]}\ndo\n egrep -w "\$1" "\$fileModified" \ndone"
Expected output:
cd ${2}
filesModifiedBetweenDates=$(find . -type f -exec ls -l --time-style=full-iso {} \; | awk '{print $6,$NF}' | awk '{gsub(/-/,"",$1);print}' | awk '$1>= '20140806' && $1<= '20140915' {print $2}' | tr "\n" ";")
IFS=; read -ra fileModifiedArray <<< $filesModifiedBetweenDates
for fileModified in ${fileModifiedArray[@]}
do
egrep -w $1 $fileModified
done
Original Output:
cd ${2}
filesModifiedBetweenDates=$(find . -type f -exec ls -l --time-style=full-iso {} \; | awk '{print $6,$NF}' | awk '{gsub(/-/,"",$1);print}' | awk '$1>= '20140806' && $1<= '20140915' {print $2}' | tr "n" ";")
IFS=; read -ra fileModifiedArray <<< $filesModifiedBetweenDates
for fileModified in ${fileModifiedArray[@]}
do
egrep -w $1 $fileModified
done
How can I handle "\" in this?

For long blocks of text, it's much simpler to use a quoted here document than to try to embed a multi-line string into a single argument to echo or printf.
cat <<"EOF"
cd ${2}
filesModifiedBetweenDates=$(find . -type f -exec ls -l --time-style=full-iso {} \; | awk '{print $6,$NF}' | awk '{gsub(/-/,"",$1);print}' | awk '$1>= '20140806' && $1<= '20140915' {print $2}' | tr "\n" ";")
IFS=; read -ra fileModifiedArray <<< $filesModifiedBetweenDates
for fileModified in ${fileModifiedArray[@]}
do
egrep -w $1 $fileModified
done
EOF
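Note the quotes around EOF: they are what disables expansion inside the here document. A minimal sketch of the difference (the variable is just for illustration):
$ x=expanded
$ cat <<"EOF"
$x stays literal
EOF
$x stays literal
$ cat <<EOF
$x does not
EOF
expanded does not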

You'd better use printf to get better control:
$ printf "tr %s %s\n" '"\n"' '";"'
tr "\n" ";"
As you see, we indicate the placeholders within the double-quoted format string: printf "text %s %s", and then we define what content should be substituted for these placeholders.
In case you really have to use echo, then escape the \:
$ echo -e 'tr "\\n" ";"'
tr "\n" ";"
Interesting read: Why is printf better than echo?

Related

I want my script to echo "$1" into a file literally

This is part of my script
#!/bin/bash
echo "ls /SomeFolder | grep $1 | xargs cat | grep something | grep .txt | awk '{print $2}' | sed 's/;$//';" >> script2.sh
This echoes everything nicely into my script except $1 and $2. Instead it outputs the values of those variables, but I want it to literally read "$1" and "$2". Help?
Escape it:
echo "ls /SomeFolder | grep \$1 | xargs cat | grep something | grep .txt | awk '{print \$2}' | sed 's/;\$//';" >> script2.sh
Quote it:
echo "ls /SomeFolder | grep "'$'"1 | xargs cat | grep something | grep .txt | awk '{print "'$'"2}' | sed 's/;"'$'"//';" >> script2.sh
or like this:
echo 'ls /SomeFolder | grep $1 | xargs cat | grep something | grep .txt | awk '\''{print $2}'\'' | sed '\''s/;$//'\'';' >> script2.sh
Use quoted here document:
cat << 'EOF' >> script2.sh
ls /SomeFolder | grep $1 | xargs cat | grep something | grep .txt | awk '{print $2}' | sed 's/;$//';
EOF
Basically you want to prevent expansion, i.e. take the string literally. You may want to read the BashFAQ entry on quotes.
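The underlying rule, in a minimal sketch (using a dummy positional parameter):
$ set -- foo              # make $1 expand to foo
$ echo "grep $1"          # double quotes: $1 is expanded
grep foo
$ echo 'grep $1'          # single quotes: $1 stays literal
grep $1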
First, you'd never write this (see https://mywiki.wooledge.org/ParsingLs, http://porkmail.org/era/unix/award.html and you don't need greps+seds+pipes when you're using awk):
ls /SomeFolder | grep $1 | xargs cat | grep something | grep .txt | awk '{print $2}' | sed 's/;$//'
you'd write this instead:
find /SomeFolder -mindepth 1 -maxdepth 1 -type f -name "*$1*" -exec \
awk '/something/ && /.txt/{sub(/;$/,"",$2); print $2}' {} +
or if you prefer using print | xargs instead of -exec:
find /SomeFolder -mindepth 1 -maxdepth 1 -type f -name "*$1*" -print0 |
xargs -0 awk '/something/ && /.txt/{sub(/;$/,"",$2); print $2}'
and now to append that script to a file would be:
cat <<'EOF' >> script2.sh
find /SomeFolder -mindepth 1 -maxdepth 1 -type f -name "*$1*" -print0 |
xargs -0 awk '/something/ && /.txt/{sub(/;$/,"",$2); print $2}'
EOF
Btw, if you want the . in .txt to be treated literally instead of as a regexp metachar meaning "any character" then you should be using \.txt instead of .txt.
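For instance, with a hypothetical file name data_txt (hand-run sketch):
$ echo 'data_txt' | awk '/.txt/'     # . matches any character, so _txt matches
data_txt
$ echo 'data_txt' | awk '/\.txt/'    # literal dot required: no output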

Getting a list of substring based unique filenames in an array

I have a directory my_dir with files having names like:
a_v5.json
a_v5.mapping.json
a_v5.settings.json
f_v39.json
f_v39.mapping.json
f_v39.settings.json
f_v40.json
f_v40.mapping.json
f_v40.settings.json
c_v1.json
c_v1.mapping.json
c_v1.settings.json
I'm looking for a way to get an array [a_v5, f_v40, c_v1] in bash. Here, an array of file name prefixes with the latest version number for each is what I need.
Tried this: ls *.json | find . -type f -exec basename "{}" \; | cut -d. -f1, but it returns results that include files which do not have the .json extension.
You can use the following command if your filenames don't contain whitespace and special symbols like * or ?:
array=($(
find . -type f -iname \*.json |
sed -E 's|(.*/)*(.*_v)([0-9]+)\..*|\2 \3|' |
sort -Vr | sort -uk1,1 | tr -d ' '
))
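To see why it works, here is roughly what the first stages produce for the sample files (a hand-traced sketch, truncated):
$ find . -type f -iname \*.json | sed -E 's|(.*/)*(.*_v)([0-9]+)\..*|\2 \3|' | sort -Vr | head -4
f_v 40
f_v 40
f_v 40
f_v 39
sort -uk1,1 then keeps only the first line per prefix, which is the highest version since the input is already sorted descending, and tr -d ' ' glues prefix and version back together.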
It's ugly and unsafe. The following solution is longer but can handle all file names, even those with linebreaks in them.
maxversions() {
find -type f -iname \*.json -print0 |
gawk 'BEGIN { RS = "\0"; ORS = "\0" }
match($0, /(.*\/)*(.*_v)([0-9]+)\..*/, group) {
prefix = group[2];
version = group[3];
if (version > maxversion[prefix])
maxversion[prefix] = version
}
END {
for (prefix in maxversion)
print prefix maxversion[prefix]
}'
}
mapfile -d '' array < <(maxversions)
In both cases you can check the contents of array with declare -p array.
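For the sample files above, that check would print something like this (element order may vary):
$ declare -p array
declare -a array=([0]="f_v40" [1]="c_v1" [2]="a_v5")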
Arrays and bash string parsing.
declare -A tmp=()
for f in $SOURCE_DIR/*.json
do f=${f##*/} # strip path
tmp[${f%%.*}]=1 # strip extraneous data after . in filename
done
declare -a c=( $( printf "%s\n" "${!tmp[@]}" | cut -c 1 | sort -u ) ) # get just the first chars
declare -a lst=( $( for f in "${c[@]}"
do printf "%s\n" "${!tmp[@]}" |
grep "^${f}_" |
sort -n |
tail -1; done ) )
echo "[ ${lst[@]} ]"
[ a_v5 c_v1 f_v40 ]
Or, if you'd rather,
declare -a arr=( $(
for f in $SOURCE_DIR/*.json
do d=${f%/*} # get dir path
f=${f##*/} # strip path
g=${f:0:2} # get leading str
( cd $d && printf "%s\n" ${g}*.json |
sort -n | sed -n '$ { s/[.].*//; p; }' )
done | sort -u ) )
echo "[ ${arr[#]} ]"
[ a_v5 c_v1 f_v40 ]
This is one possible way to accomplish this:
arr=( $( { for name in $( ls {f,n,m}*.txt ); do echo ${name:0:1} ; done; } | sort | uniq ) )
Output :
$ echo ${arr[0]}
f
$ echo ${arr[1]}
m
$ echo ${arr[2]}
n
Regards!
AWK SOLUTION
This is not an elegant solution... my knowledge of awk is limited.
You should find this functional.
I've updated this to remove the obsolete uniq as suggested by @socowi.
I've also included the printf version as @socowi suggested.
ls *.json | cut -d. -f1 | sort -rn | awk -v last="xx" '$1 !~ last{ print $1; last=substr($1,1,3) }'
OR
printf %s\\n *.json | cut -d. -f1 | sort -rn | awk -v last="xx" '$1 !~ last{ print $1; last=substr($1,1,3) }'
Old understanding below
Find files with name matching pattern.
Now take the second field, since your results will likely be prefixed with ./
find . -type f -iname "*.json" | cut -d. -f2
To get the unique headings....
find . -type f -iname "*.json" | cut -d. -f2 | sort | uniq
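For the sample files above, that would print something like:
$ find . -type f -iname "*.json" | cut -d. -f2 | sort | uniq
/a_v5
/c_v1
/f_v39
/f_v40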

Perform a CAT in FOR and SSH

I do not have much experience with shell scripting, so I need your help. I have the following query: I need to run cat on the files that I list, but I have not managed to figure out where to place the command. Thank you:
read date
echo -e "RECORDINGS"
for e in $Rec
do
sshpass -p password ssh user@server find $e "-type f -mtime -10 -exec ls -gGh --full-time {} \;" | cut -d ' ' -f 4,7 | grep $date | awk -F " " '{print $2}'
done
Ignoring that much of the code here is outright dangerous --
find_on_server() {
local e_q
printf -v e_q '%q ' "$1"
sshpass -p password ssh user#server "bash -s $e_q" <<'EOF'
e=$1
find "$e" -type f -mtime -10 -exec ls -gGh --full-time {} \;
EOF
# ^^^ the above line MUST NOT BE INDENTED
}
find_on_server "$e" | cut -d ' ' -f 4,7 | grep $date | awk -F " " '{print $2}'
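The printf -v e_q '%q ' "$1" line is what makes this safe: %q quotes the value so it survives word splitting on the remote side. A minimal sketch with a hypothetical path:
$ printf -v e_q '%q ' '/var/log/my dir'
$ echo "$e_q"
/var/log/my\ dir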

Find TXT files and show Total Count of records of each file and Size of each file

I need to find the row count and size of each TXT file.
It needs to search all the directories and just show the result as:
FileName|Cnt|Size
ABC.TXT|230|23MB
Here is some code:
v_DIR=$1
echo "the directory to cd is "$1
x=`ls -l $0 | awk '{print $9 "|" $5}'`
y=`awk 'END {print NR}' $0`
echo $x '|' $y
Try something like
find -type f -name '*.txt' -exec bash -c 'lines=$(wc -l "$0" | cut -d " " -f1); size=$(du -h "$0" | cut -f1); echo "$0|$lines|$size"' {} \;
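Since the sample file in the question is uppercase (ABC.TXT), you may want -iname rather than -name so both cases match. With that tweak, the output for the question's example would look something like this (note that du -h prints 23M rather than 23MB):
$ find -type f -iname '*.txt' -exec bash -c 'lines=$(wc -l "$0" | cut -d " " -f1); size=$(du -h "$0" | cut -f1); echo "$0|$lines|$size"' {} \;
./ABC.TXT|230|23M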

bash command substitution force to foreground

I have this:
echo -e "\n\n"
find /home/*/var/*/logs/ \
-name transfer.log \
-exec awk -v SUM=0 '$0 {SUM+=1} END {print "{} " SUM}' {} \; \
> >( sed '/\b0\b/d' \
| awk ' BEGIN {printf "\t\t\tTRANSFER LOG\t\t\t\t\t#OF HITS\n"}
{printf "%-72s %-s\n", $1, $2}
' \
| (read -r; printf "%s\n" "$REPLY"; sort -nr -k2)
)
echo -e "\n\n"
When run on a machine with bash 4.1.2, it always returns correctly, except that I get all 4 of my newlines at the top.
When run on a machine with bash 3.00.15, it prints all 4 of my newlines at the top, returns the prompt in the middle of the output, and never completes; it just hangs.
I would really like to fix this for both versions as we have a lot of machines running both.
Why make life so difficult and unintelligible? Why not simplify?
TXFRLOG=$(find /home..... transfer.log)
awk .... ${TXFRLOG}
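Spelled out, that suggestion amounts to something like the following sketch (it keeps the question's assumption that the log paths contain no whitespace):
TXFRLOG=$(find /home/*/var/*/logs/ -name transfer.log)
for f in $TXFRLOG; do
    awk -v SUM=0 '$0 {SUM+=1} END {print FILENAME " " SUM}' "$f"
done | sort -nr -k2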
The answer I found was to use a while read loop:
echo -e "\n\n"; \
printf "\t\t\tTRANSFER LOG\t\t\t\t\t#OF HITS\n"; \
while read -r line; \
do echo "$line" |sed '/\b0\b/d' | awk '{printf "%-72s %-s\n", $1, $2}'; \
done < <(find /home/*/var/*/logs/ -name transfer.log -exec awk -v SUM=0 '$0 {SUM+=1} END{print "{} " SUM}' {} \;;) \
|sort -nr -k2; \
echo -e "\n\n"
