bash: save values in a variable

Given a file files.txt with:
sara.gap sara.gao
pe.gap pe.gao
I just want to set f=sara in my bash script, because I need f later in the script. So I tried to get the first line, take the second field, remove .gao, and save the result in f:
f=sed -ne '1p' files.txt |cut -d " " -f2 |sed 's/.gao//g'
But it did not work; please help me ;(

You just need backticks:
f=`head -1 files.txt | cut -d " " -f2 | sed 's/\.gao//g'`

I'd do:
read f junk < files.txt
f=${f%.gap}
Oh, and for the second field:
read junk f junk < files.txt
f=${f%.gao}
That's completely in bash :-)
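For reference, ${var%pattern} strips the shortest suffix matching the pattern, so the extension comes off without spawning any external process:
f=sara.gao
echo "${f%.gao}"    # prints: sara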

Use Command Substitution if you want to use the output of a command to set a variable. The format is v=$(command). You can also use backticks e.g. v=`command`, but this has been superseded by the $(...) form.
Your command would be:
f=$(sed -ne '1p' files.txt | cut -d " " -f2 | sed 's/\.gao//g')
echo "$f"
prints
sara

You mean this?
f="sed -ne '1p' files.txt |cut -d ' ' -f2 |sed 's/.gao//g'"
eval "$f"
with output:
sara
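Note that eval here runs the pipeline and prints sara, but it does not store the result anywhere; to keep the value you would still wrap the call in command substitution, e.g. result=$(eval "$f").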

You can use awk as well
f=$(awk 'NR==1{gsub(/\.gao/,"",$2);print $2;exit}' file)

Related

sh to read a file and take particular value in shell

I need to read a JSON file, take values like 99XXXXXXXXXXXX0 and cccs, and write them to a CSV with the columns BASE_No and Schedule.
Input file: classedFFDCD_5666_4888_45_2018_02112018012106.021.json
"bfgft":"99XXXXXXXXXXXX0","fp":"XXXXXX","cur_gt":225XXXXXXXX0,"cccs"
"bfgft":"21XXXXXXXXXXXX0","fp":"XXXXXX","cur_gt":225XXXXXXXX0,"nncs"
"bfgft":"56XXXXXXXXXXXX0","fp":"XXXXXX","cur_gt":225XXXXXXXX0,"fgbs"
"bfgft":"44XXXXXXXXXXXX0","fp":"XXXXXX","cur_gt":225XXXXXXXX0,"ddss"
"bfgft":"94XXXXXXXXXXXX0","fp":"XXXXXX","cur_gt":225XXXXXXXX0,"jjjs"
Expected output:
BASE_No,Schedule
99XXXXXXXXXXXX0,cccs
21XXXXXXXXXXXX0,nncs
56XXXXXXXXXXXX0,fgbs
44XXXXXXXXXXXX0,ddss
94XXXXXXXXXXXX0,jjjs
I am using the code below to read the file name and date, but I am unable to extract BASE_No and Schedule.
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
for line in `ls -lrt *.json`; do
date=$(echo $line |awk -F ' ' '{print $6" "$7}');
file=$(echo $line |awk -F ' ' '{print $9}');
echo ''$file','$(date "+%Y/%m/%d %H.%M.%S")'' >> $File_Tracker
done
Assuming the structure of the JSON doesn't change from line to line, the sample code below walks through the file line by line, retrieves the two values, and concatenates them with printf. The output is stored in a new output.txt file.
#!/bin/bash
input="/home/kj4458/winhome/Downloads/sample.json"
printf "Base,Schedule \n" > output.txt
while IFS= read -r var
do
# keep the data out of the printf format string
base=$(echo "$var" | cut -d':' -f 2 | cut -d',' -f 1)
schedule=$(echo "$var" | cut -d':' -f 4 | cut -d',' -f 2)
printf '%s,%s \n' "$base" "$schedule" | sed 's/"//g' >> output.txt
done < "$input"
awk -F'"' '{print $4","$12}' file
99XXXXXXXXXXXX0,cccs
21XXXXXXXXXXXX0,nncs
56XXXXXXXXXXXX0,fgbs
44XXXXXXXXXXXX0,ddss
94XXXXXXXXXXXX0,jjjs
I got that result!
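If you also want the BASE_No,Schedule header row from the expected output, a minimal extension of the same idea (still assuming the quote-delimited layout shown above) would be:
awk -F'"' 'BEGIN{print "BASE_No,Schedule"} {print $4","$12}' file > output.csv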

Shell sed command

I have paths.txt like:
pathO1/:pathD1/
pathO2/:pathD2/
...
pathON/:pathDN/
How can I use sed to insert '*' after each pathOX/ ?
The script is:
while read line
do
cp $(echo $line | tr ':' ' ')
done < "paths.txt"
so that it becomes:
while read line
do
cp $(echo $line | sed 's/:/* /1')
done < "paths.txt"
This looks similar to a question you asked earlier: Shell Script: Read line in file
Just apply the same trick of removing any pre-existing '*' before applying tr:
cp $(echo $line | sed 's/\*//1' | tr ':' '* ')
while read line
do
path=`echo "$line" | sed 's/:/ /g'`
cmd="cp $path"
echo $cmd
eval $cmd
done < "./paths.txt"
quick and dirty awk one-liner without loop to do the job:
awk -F: '$1="cp "$1' paths.txt
this will output:
cp /home/Documents/shellscripts/Origen/* /home/Documents/shellscripts/Destino/
cp /home/Documents/shellscripts/Origen2/* /home/Documents/shellscripts/Destino2/
...
if you want the cmds to get executed:
awk -F: '$1="cp "$1' paths.txt|sh
I said it's quick & dirty because:
the format must be path1:path2
the paths cannot contain special characters (like spaces) or ':'
Using pure shell:
while IFS=: read -r p1 p2
do
cp "$p1"* "$p2"
done < file
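The * is left outside the quotes on purpose: the shell expands "$p1"* to the matching files before cp runs, whereas a quoted asterisk would be passed to cp literally.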

How to reduce the use of `echo` in a bash script?

My bash script contains the following line:
echo $(echo "$STRING_VAR" | cut -d' ' -f 2) >> $FILE
Here we have two echo calls, but are they really necessary?
I wrote them because otherwise bash would treat the string at the start of the line as a command.
Simply echo "$STRING_VAR" | cut -d' ' -f 2 >> $FILE does the same thing.
echo "$STRING_VAR" | cut -d' ' -f 2 >> $FILE
should be all you need
Also, bash has the handy "here-string" redirection mode: you don't need echo at all:
cut -d' ' -f2 <<< "$STRING_VAR" >> "$FILE"
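Like echo, the here-string supplies the value of $STRING_VAR followed by a trailing newline, so cut sees exactly the same input as in the original pipeline.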

bash: grep only lines with certain criteria

I am trying to grep out the lines in a file where the third field matches certain criteria.
I tried using grep but had no luck in filtering out by a field in the file.
I have a file full of records like this:
12794357382;0;219;215
12795287063;0;220;215
12795432063;0;215;220
I need to grep only the lines where the third field is equal to 215 (in this case, only the third line)
Thanks a lot in advance for your help!
Put down the hammer: grep is not the tool for matching a specific field.
$ awk -F ";" '$3 == 215 { print $0 }' <<< $'12794357382;0;219;215\n12795287063;0;220;215\n12795432063;0;215;220'
12795432063;0;215;220
grep:
grep -E "^[^;]*;[^;]*;215;" yourFile
in this case, awk would be easier:
awk -F';' '$3==215' yourFile
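(In awk, a pattern with no action prints the matching line by default, so $3==215 is all you need.)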
A solution in pure bash for the pre-processing, still needing a grep:
while read line; do
OLD_IFS=$IFS; IFS=";"
line_array=( $line )
IFS=$OLD_IFS
test "${line_array[2]}" = 215 && echo "$line"
done < file | grep _your_pattern_
Simple egrep (=grep -E)
egrep ';215;[0-9][0-9][0-9]$' /path/to/file
or
egrep ';215;[[:digit:]]{3}$' /path/to/file
How about something like this:
cat your_file | while read line; do
if [ `echo "$line" | cut -d ";" -f 3` == "215" ]; then
echo "$line"   # this is the line you want
fi
done
Here is the sed version to grep for lines where 3rd field is 215:
sed -n '/^[^;]*;[^;]*;215;/p' file.txt
Simplify your problem by putting the 3rd field at the beginning of the line:
cut -d ";" -f 3 file | paste -d ";" - file
then grep for the lines matching the 3rd field and remove the 3rd field at the beginning:
grep "^215;" | cut -d ";" -f 2-
and then you can grep for whatever you want. So the complete solution is:
cut -d ";" -f 3 file | paste -d ";" - file | grep "^215;" | cut -d ";" -f 2- | grep _your_pattern_
Advantage: Easy to understand; drawback: many processes.

hash each line in text file

I'm trying to write a little script which will open a text file and give me an md5 hash for each line of text. For example I have a file with:
123
213
312
I want output to be:
ba1f2511fc30423bdbb183fe33f3dd0f
6f36dfd82a1b64f668d9957ad81199ff
390d29f732f024a4ebd58645781dfa5a
I'm trying to do this part in bash which will read each line:
#!/bin/bash
#read.file.line.by.line.sh
while read line
do
echo $line
done
later on I do:
$ more 123.txt | ./read.line.by.line.sh | md5sum | cut -d ' ' -f 1
but I'm missing something here; it does not work :(
Maybe there is an easier way...
Almost there, try this:
while read -r line; do printf %s "$line" | md5sum | cut -f1 -d' '; done < 123.txt
Unless you also want to hash the newline character at the end of every line, you should use printf or echo -n instead of plain echo.
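You can see the difference directly (using GNU coreutils md5sum):
printf %s 123 | md5sum        # 202cb962ac59075b964b07152d234b70 (no newline hashed)
printf '%s\n' 123 | md5sum    # ba1f2511fc30423bdbb183fe33f3dd0f (the value the question expects)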
In a script:
#! /bin/bash
cat "$@" | while read -r line; do
printf %s "$line" | md5sum | cut -f1 -d' '
done
The script can be called with multiple files as parameters.
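For example, if the script were saved as hash-lines.sh (the name is arbitrary):
./hash-lines.sh 123.txt other.txt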
You can just call md5sum directly in the script:
#!/bin/bash
#read.file.line.by.line.sh
while read -r line
do
echo "$line" | md5sum | awk '{print $1}'
done
That way the script spits out directly what you want: the md5 hash of each line.
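Note that echo appends a newline, so each line is hashed together with its line terminator; that is exactly what produces the hashes listed in the question (e.g. ba1f2511fc30423bdbb183fe33f3dd0f for 123).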
This worked for me:
cat $file | while read line; do printf %s "$line" | tr -d '\r\n' | md5 >> hashes.csv; done
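Here md5 is the BSD/macOS counterpart of md5sum, and tr -d '\r\n' additionally strips carriage returns from Windows-style line endings.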
