I start with a file.txt that contains several pieces of information, which I extract with the grep command as you can see in the script.
What I want is to pass the script the file I want instead of file.txt, without editing the file name in the script every time. For example, if the file is named Me.txt, I don't want to open the script and write Me.txt in each grep command, especially if there are dozens of commands.
Is there a way to do this?
#!/bin/bash
grep teste file.txt > testline.txt
awk '{print $2}' testline.txt > test.txt
echo '#'
echo '#'
grep remote file.txt > remoteline.txt
awk '{print $3}' remoteline.txt > remote.txt
echo '#'
echo '#'
grep adresse file.txt > adresseline.txt
awk '{print $2}' adresseline.txt > adresse.txt
Using a parameter, as many contributors here suggested, is of course the obvious approach, and the one which is usually taken in such a case, so I want to extend this idea:
If you do it naively as
filename=$1
you have to supply the name on every invocation. You can improve on this by providing a default value for when the parameter is missing:
filename=${1:-file.txt}
But sometimes you are in a situation where, for some time (working on a specific task), you always need the same filename over and over, and the default value happens not to be the one you need. Another possibility for passing information to a program is via the environment. If you set the filename with
filename=${MOOFOO:-file.txt}
it means that - assuming your script is called myscript.sh - if you invoke your script by
MOOFOO=myfile.txt myscript.sh
it uses myfile.txt, while if you call it by
myscript.sh
it uses the default file.txt. You can also set MOOFOO in your shell, as
export MOOFOO=myfile.txt
and then, even a lone execution of
myscript.sh
will use myfile.txt instead of the default file.txt.
The most flexible approach is to combine both, and this is what I often do in such a situation. If your script does a
filename=${1:-${MOOFOO:-file.txt}}
it takes the name from the 1st parameter, but if there is no parameter, takes it from the variable MOOFOO, and if this variable is also undefined, uses file.txt as the last fallback.
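Putting the pieces together, the top of the original script might then look like this (just a sketch; MOOFOO is an arbitrary variable name, use whatever fits your project):
#!/bin/bash
# 1st positional parameter wins, then the MOOFOO environment variable, then file.txt
filename=${1:-${MOOFOO:-file.txt}}
grep teste "$filename" > testline.txt
awk '{print $2}' testline.txt > test.txt
You can then call it as myscript.sh Me.txt, as MOOFOO=Me.txt myscript.sh, or with no argument at all to fall back to file.txt.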
You should pass the filename as a command line parameter so that you can call your script like so:
script <filename>
Inside the script, you can access the command line parameters in the variables $1, $2,.... The variable $# contains the number of command line parameters passed to the script, and the variable $0 contains the path of the script itself.
As with all variables, you can choose to put the variable name in curly braces, which sometimes has advantages: ${1}, ${2}, ...
#!/bin/bash
if [ $# = 1 ]; then
filename=${1}
else
echo "USAGE: $(basename ${0}) <filename>"
exit 1
fi
grep teste "${filename}" > testline.txt
awk '{print $2}' testline.txt > test.txt
echo '#'
echo '#'
grep remote "${filename}" > remoteline.txt
awk '{print $3}' remoteline.txt > remote.txt
echo '#'
echo '#'
grep adresse "${filename}" > adresseline.txt
awk '{print $2}' adresseline.txt > adresse.txt
By the way, you don't need two different files to achieve what you want; you can just pipe the output of grep straight into awk, e.g.:
grep teste "${filename}" | awk '{print $2}' > test.txt
but then again, awk can do the regex match itself, reducing it all to just one command:
awk '/teste/ {print $2}' "${filename}" > test.txt
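Applied to the whole script, that reduction could look roughly like this (a sketch; it keeps the original output file names and uses a simple default for filename rather than the usage check above):
#!/bin/bash
filename=${1:-file.txt}
awk '/teste/ {print $2}' "${filename}" > test.txt
echo '#'
echo '#'
awk '/remote/ {print $3}' "${filename}" > remote.txt
echo '#'
echo '#'
awk '/adresse/ {print $2}' "${filename}" > adresse.txt
Note that the intermediate testline.txt, remoteline.txt and adresseline.txt files disappear entirely.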
I have this variable:
a='/08/OPT/imaginary/N/08_i_N.out'
I want to use "/" as a field separator.
Then, I want to extract the first pattern.
I have tried:
awk -F/ '{print $1}' "$a"
But I get:
awk: cannot open /08/OPT/imaginary/N/08_i_N.out (No such file or directory)
I do not want to open the file; I only want to work on the path of that file.
The same way as with any other command; either of these works (or other alternatives, e.g. within here-documents, or passed as awk variables, or ...):
printf '%s\n' "$a" | command
command <<<"$a"
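Applied to your variable, either form feeds the string (not a file) to awk. Note that because the value starts with /, the first field is empty, so the first path component is $2 rather than $1 (a sketch):
printf '%s\n' "$a" | awk -F/ '{print $2}'
awk -F/ '{print $2}' <<<"$a"
Both print 08.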
I have a variable Firstline with the value FHEAD,0000000001,STKU,20150927000000,201509270000000000,1153,,0000000801,W, from which I need the 5th field alone.
Can anyone help me resolve this?
I have used the command below, but it gives me an error:
echo "FHEAD,0000000001,STKU,20150927000000,201509270000000000,1153,,0000000801,W" | awk -f ',' '{print $5}'
awk: fatal: can't open source file `,' for reading (No such file or directory)
As you tagged it bash and not awk (which would also be a valid solution), you can do
IFS=, read -a a <<< "FHEAD,0000000001,STKU,20150927000000,201509270000000000,1153,,0000000801,W"
echo ${a[4]}
to obtain the same result without spawning a new process (note that bash arrays are 0-based).
Try -F, not -f.
-F is for the field separator.
-f is for the filename of the awk program.
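With that one-character fix the original pipeline works as intended; assuming the string is in the variable Firstline as described, a sketch:
echo "$Firstline" | awk -F',' '{print $5}'
This prints 201509270000000000.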
You can use sed too
echo "..." | sed -E 's/([^,]*,){4}([^,]*).*/\2/'
I am trying to use one or two lines of Bash (that can be run in a command line) to read a folder-name and return the version inside of the name.
So if I have myfolder_v1.0.13 I know that I can use echo "myfolder_v1.0.13" | awk -F"v" '{ print $2 }' and it will return with 1.0.13.
But how do I get the shell to read the folder name and pipe it to the awk command to give me the same result without using echo? I suppose I could always navigate to the directory and translate the output of pwd into a variable somehow?
Thanks in advance.
Edit: As soon as I asked I figured it out. I can use
result=${PWD##*/}; echo $result | awk -F"v" '{ print $2 }'
and it gives me what I want. I will leave this question up for others to reference unless someone wants me to take it down.
But you don't need awk at all here; just use bash parameter expansion.
string="myfolder_v1.0.13"
printf "%s\n" "${string##*v}"
1.0.13
You can use
basename "$(cd "foldername" ; pwd )" | awk -Fv '{print $2}'
to get the shell to give you the directory name, but if you really want to use the shell, you could also avoid the use of awk completely:
Assuming you have the path to the folder with the version number in the parameter "FOLDERNAME":
echo "${FOLDERNAME##*v}"
This removes the longest prefix matching the glob expression "*v" in the value of the parameter FOLDERNAME.
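For the original use case of taking the version from the current directory's name, the two expansions can be combined without any external command (a sketch):
FOLDERNAME=${PWD##*/}        # e.g. myfolder_v1.0.13
echo "${FOLDERNAME##*v}"     # prints 1.0.13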
I have a CSV list that is two columns (col1 is Share Name, col2 is file system path). I need two variables: either everything BEFORE the comma, or everything AFTER the comma. My issue is that either column potentially has spaces, and even though these are quoted in the output, my script isn't handling them properly.
CSV:
ShareName,/path/to/sharename
"Share with spaces",/path/to/sharewithspaces
ShareWithSpace,"/path/to/share with spaces"
I was using this awk statement to get either field 1 or field 2:
echo $line | awk -F "\"*,\"*" '{print $2}'
BUT, I soon realized that it wasn't handling the spaces properly, even when capturing that command's output in a variable and quoting the variable.
So, then after googling my brain out, I was trying this:
echo $line | cut -d, -f2
Which works, EXCEPT when echoing the variable $line. If I echo the string, it works perfectly, but unfortunately I'm using this in a while/read/do.
I am fairly certain my issue is having to define fields and having whitespace, but I really only need before or after a comma.
Here's the stripped down version so there's no sensitive data.
#!/usr/bin/bash
ssh <ip> <command> > "2_shares.txt"
<command> > "1_shares.txt"
file1="1_shares.txt"
file2="2_shares.txt"
while read -r line
do
share=`echo "$line" | awk -F "\"*,\"*" '{print $1}'`
path=`echo "$line" | awk -F "\"*,\"*" '{print $2}'`
if grep "$path" $file2 > /dev/null;
then
:
else
echo "SHARE NEEDS CREATED FOR $line"
case $path in
*)
blah blah blah
;;
esac
fi
done < "$file1"
You could simply do it like this:
awk -F',' '{print $2}' file
To skip the first line:
awk -F',' 'NR>1{print $2}' file
Your issue is simply that you aren't quoting your shell variables. ALWAYS quote shell variables unless you have a very specific reason not to and are fully aware of all of the consequences.
I strongly suspect the rest of your script is completely wrong in its approach, since you apparently didn't know to quote variables and are talking about shell loops and echoing one line at a time to awk, so please do post a follow-up question if you'd like help.
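For illustration only (this is not the answerer's code), a quoted rewrite of the loop could let read do the splitting, assuming the fields never contain embedded commas; the quote stripping here is deliberately naive:
while IFS=, read -r share path; do
    # strip surrounding double quotes, if any (naive; assumes no embedded commas)
    share=${share#\"}; share=${share%\"}
    path=${path#\"}; path=${path%\"}
    if ! grep -q "$path" "$file2"; then
        echo "SHARE NEEDS CREATED FOR $share,$path"
    fi
done < "$file1"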
How can I pass the output of awk to a for file in loop?
for file in awk '{print $2}' my_file; do echo $file done;
my_file contains the name of the files whose name should be displayed (echoed).
I get just a
>
instead of my normal prompt.
Use backticks or $(...) to substitute the output of a command:
for file in $(awk '{print $2}' my_file)
do
echo "$file"
done
for file in $(awk '{print $2}' my_file); do echo "$file"; done
The notation to use is $(...) or Command Substitution.
for file in $(awk '{print $2}' my_file)
do
echo $file
done
Here I assume that you do more in the body of the loop than just echo, since otherwise you could leave the loop out altogether:
awk '{print $2}' my_file
Or, if you miss typing semicolons and don't like to spread code over multiple lines for readability, then you can use:
for file in $(awk '{print $2}' my_file); do echo $file; done
You will also find in (mostly older) code the backticks used:
for file in `awk '{print $2}' my_file`
do
echo $file
done
Quite apart from being difficult to use in the Markdown used to format comments (and questions and answers) on Stack Overflow, the backticks are not as friendly, especially when nested, so you should recognize them and understand them but not use them.
Incidentally, the reason you got the > prompt is that this command line:
for file in awk '{print $2}' my_file; do echo $file done;
is missing a semicolon before the done. The shell was still waiting for the done. Had you typed done and return, you would have seen the output:
awk done
{print $2} done
my_file done
Using backticks or $(awk ...) for command substitution is an acceptable solution for a small number of files; however, consider using xargs for single commands or pipes, or a simple while read ... loop for more complex tasks (it will work for simple ones too):
awk '...' |while read FILENAME; do
#do work with each file here using $FILENAME
done
This allows each filename to be processed as it arrives, instead of having to wait for the whole awk script to complete, and it copes with a larger set of filenames (you can only pass so many arguments to a for x in ...; do). This will typically speed up your scripts and allows the same kinds of operations you would get in a for in loop, without its limitations.
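For comparison, the xargs route mentioned above might look like this when a single command per file is enough (a sketch; -r is a GNU xargs extension that skips the run when awk prints nothing, and it assumes the filenames contain no spaces or newlines):
awk '{print $2}' my_file | xargs -r -n1 echo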