how to take inputs as an equality in shell script (bash) - bash

I am working on a project that requires a shell script to rename all the files
in a directory. The first argument is a base
name, the second argument is a file extension. If it is run as:
./myprog4.sh BASE=Birthday EXT=jpg
then the resulting files should have names like:
Birthday001.jpg, Birthday002.jpg, Birthday003.jpg, etc.
But I couldn't take the inputs as an equality like BASE=$1.
Normally when I take the inputs while executing the script file I write something like:
base=$1
extension=$2
What should I do?

First, know that you can set values in the environment of your program by prefixing the command with the variable assignments:
BASE=Birthday EXT=jpg ./myprog4.sh
When myprog4.sh starts, it will see BASE and EXT with the given values. After myprog4.sh exits, BASE and EXT retain their old values (or remain unset, as the case may be).
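For instance, a minimal sketch of what myprog4.sh could do with those environment variables (the fallback defaults "file" and "txt" are made up for illustration, not part of the question):

```shell
#!/bin/bash
# Sketch: read BASE and EXT from the environment, with assumed
# fallback defaults in case the caller did not set them.
base="${BASE:-file}"
extension="${EXT:-txt}"
printf 'would produce %s001.%s, %s002.%s, ...\n' \
    "$base" "$extension" "$base" "$extension"
```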
bash does allow you to call your program the way you are trying, with the -k option. From the man page:
-k All arguments in the form of assignment statements are placed in the environment for a command, not just those that precede the command name.
To use it, you would need to use the set command to enable this option before you call myprog4.sh.
$ set -k
$ ./myprog4.sh BASE=Birthday EXT=jpg

It's similar to how you read $1 and $2 into variables, except that you extract only the part after the equals sign:
base=$(echo "$1" | cut -d'=' -f2)
extension=$(echo "$2" | cut -d'=' -f2)
In bash, you can also do:
base=$(cut -d'=' -f2 <<<"$1")
extension=$(cut -d'=' -f2 <<<"$2")

If all your arguments are passed in the form name=value, then you can do the following at the beginning of your myprog4.sh script:
while [ "$1" != "" ]; do
    export "$1"
    shift
done
When you call ./myprog4.sh BASE=Birthday EXT=jpg your script will be able to do echo $BASE and echo $EXT.
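Whichever route you take, the parsing step can also be sketched with bash parameter expansion alone (`${arg%%=*}` is everything before the first `=`, `${arg#*=}` everything after it); the renaming loop at the end is only indicated in comments:

```shell
#!/bin/bash
# Sketch: accept arguments of the form BASE=... and EXT=... in any order.
base=""
extension=""

parse_args() {
    local arg
    for arg in "$@"; do
        case "${arg%%=*}" in          # the part before the first '='
            BASE) base="${arg#*=}" ;; # the part after the first '='
            EXT)  extension="${arg#*=}" ;;
            *)    echo "unknown argument: $arg" >&2 ;;
        esac
    done
}

parse_args BASE=Birthday EXT=jpg
echo "base=$base extension=$extension"   # → base=Birthday extension=jpg

# The renaming itself could then be something like:
#   i=1
#   for f in *; do
#       printf -v newname '%s%03d.%s' "$base" "$i" "$extension"
#       mv -- "$f" "$newname"
#       i=$((i + 1))
#   done
```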

Related

Bash File names will not append to file from script

Hello, I am trying to get all files with Jane's name into a separate file called oldFiles.txt. In a directory called "data" I am reading from a list of file names in a file called list.txt, from which I put all the file names containing the name Jane into the files variable. Then I test the files variable against the file system to ensure the files exist, and append all the files containing jane to the oldFiles.txt file (which will be in the scripts directory) once each item in the files variable passes the test.
#!/bin/bash
> oldFiles.txt
files= grep " jane " ../data/list.txt | cut -d' ' -f 3
if test -e ~data/$files; then
    for file in $files; do
        if test -e ~/scripts/$file; then
            echo $file >> oldFiles.txt
        else
            echo "no files"
        fi
    done
fi
The above code gets the desired files and displays them correctly, as well as creating the oldFiles.txt file, but when I open the file after running the script I find that nothing was appended. I tried changing the assignment files= grep " jane " ../data/list.txt | cut -d' ' -f 3 to command substitution, files=$(grep " jane " ../data/list.txt), to see if capturing the raw data to write to the file would help, but then I get the error "too many arguments on line 5", which is the first if test statement. The only way I get the script to work semi-properly is when I do ./findJane.sh > oldFiles.txt on the command line, which is essentially me creating the file manually. How would I go about creating oldFiles.txt and appending to it entirely within the script?
The biggest problem you have is matching names like "jane" or "Jane's", etc. while not matching "Janes". grep provides the options -i (case insensitive match) and -w (whole-word match) which can tailor your search to what you appear to want without having to use the kludge (" jane ") of appending spaces before and after your search term. (To properly do that you would use [[:space:]]jane[[:space:]].)
You also have the problem of what is your "script dir" if you call your script from a directory other than the one containing your script, such as calling your script from your $HOME directory with bash script/findJane.sh. In that case your script will attempt to append to $HOME/oldFiles.txt. The parameter $0 contains the path used to invoke the current script, so you can capture the script directory no matter where you call the script from with:
dirname "$0"
You are using bash, so store all the filenames resulting from your grep command in an array, not in a general variable (especially since your use of " jane " suggests that your filenames contain whitespace).
You can make your script much more flexible if you take the information of your input file (e.g. list.txt), the term to search for (e.g. "jane"), the location where to check for existence of the files (e.g. $HOME/data) and the output filename to append the names to (e.g. "oldFiles.txt") as command-line [positional] parameters. You can give each default values so it behaves as you currently desire without providing any arguments.
Even with the additional flexibility of taking command-line arguments, the script actually has fewer lines: simply fill an array using mapfile (a synonym for readarray) and then loop over the contents of the array. You can also avoid the additional subshell for dirname with a simple parameter expansion; whether to test for an empty path component and replace it with '.' is up to you.
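The parameter-expansion alternative to dirname mentioned here might look like this sketch (written as a function for illustration; the argument stands in for $0):

```shell
#!/bin/bash
# Sketch: derive the script directory without a dirname subshell.
script_dir() {
    # $1: the value of $0. ${1%/*} strips the shortest trailing
    # /component; if $1 contains no slash (script found via PATH),
    # the expansion leaves it unchanged, so fall back to '.'.
    local d="${1%/*}"
    [ "$d" = "$1" ] && d="."
    printf '%s\n' "$d"
}

script_dir "./findJane.sh"        # → .
script_dir "scripts/findJane.sh"  # → scripts
```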
If I've understood your goal correctly, you can put all the pieces together with:
#!/bin/bash
# positional parameters
src="${1:-../data/list.txt}"   # 1st param - input (default: ../data/list.txt)
term="${2:-jane}"              # 2nd param - search term (default: jane)
data="${3:-$HOME/data}"        # 3rd param - file location (default: $HOME/data)
outfn="${4:-oldFiles.txt}"     # 4th param - output (default: oldFiles.txt)
# save the path to the current script in script
script="$(dirname "$0")"
# if outfn not given, prepend path to script to outfn to output
# in script directory (if script called from elsewhere)
[ -z "$4" ] && outfn="$script/$outfn"
# split names w/term into array
# using the -iw option for case-insensitive whole-word match
mapfile -t files < <(grep -iw "$term" "$src" | cut -d' ' -f 3)
# loop over files array
for ((i=0; i<${#files[@]}; i++)); do
    # test existence of file in data directory, redirect name to outfn
    [ -e "$data/${files[i]}" ] && printf "%s\n" "${files[i]}" >> "$outfn"
done
(note: test expression and [ expression ] are synonymous, use what you like, though you may find [ expression ] a bit more readable)
(further note: "Janes" being plural is not considered the same as the singular -- adjust the grep expression as desired)
Example Use/Output
As was pointed out in the comment, without a sample of your input file, we cannot provide an exact test to confirm your desired behavior.
Let me know if you have questions.
As far as I can tell, this is what you're going for. This is totally a community effort based on the comments, catching your bugs. Obviously credit to Mark and Jetchisel for finding most of the issues. Notable changes:
Fixed $files to use command substitution
Fixed path to data/$file, assuming you have a directory at ~/data full of files
Fixed the test to not test for a string of files, but just the single file (also using -f to make sure it's a regular file)
Using double brackets — you could also use double quotes instead, but you explicitly have a Bash shebang so there's no harm in using Bash syntax
Adding a second message about not matching files, because there are two possible cases there; you may need to adapt depending on the output you're looking for
Removed the initial empty redirection — if you need to ensure that the file is clear before the rest of the script, then it should be added back, but if not, it's not doing any useful work
Changed the shebang to make sure you're using the user's preferred Bash, and added set -e because you should always add set -e
#!/usr/bin/env bash
set -e
files=$(grep " jane " ../data/list.txt | cut -d' ' -f 3)
for file in $files; do
    if [[ -f $HOME/data/$file ]]; then
        if [[ -f $HOME/scripts/$file ]]; then
            echo "$file" >> oldFiles.txt
        else
            echo "no matching file"
        fi
    else
        echo "no files"
    fi
done

How to remove duplicate with bash script command xargs when the string has some quotes ""?

I am a newbie in bash script.
Here is my environment:
Mac OS X Catalina
/bin/bash
I found here a mix of several commands to remove the duplicate string in a string.
I need it for my program, which updates the .zshrc profile file.
Here is my code:
#!/bin/bash
a='export PATH="/Library/Frameworks/Python.framework/Versions/3.8/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/local/bin:"'
myvariable=$(echo "$a" | tr ':' '\n' | sort | uniq | xargs)
echo "myvariable : $myvariable"
Here is the output:
xargs: unterminated quote
myvariable :
After some test, I know that the source of the issue is due to some quotes "" inside my variable '$a'.
Why am I so sure?
Because when I execute this code for example:
#!/bin/bash
a="/Library/Java/JavaVirtualMachines/jdk1.8.0_271.jdk/Contents/Home:/Library/Java/JavaVirtualMachines/jdk1.8.0_271.jdk/Contents/Home"
myvariable=$(echo "$a" | tr ':' '\n' | sort | uniq | xargs)
echo "myvariable : $myvariable"
where $a doesn't contain any quotes, I get the correct output:
myvariable : /Library/Java/JavaVirtualMachines/jdk1.8.0_271.jdk/Contents/Home
I tried to search for a solution for "xargs: unterminated quote" but each answer found on the web is for a particular case which doesn't correspond to my problem.
As I am a newbie and this line command is using several complex commands, I was wondering if anyone know the magic trick to make it work.
Basically, you want to remove duplicates from a colon-separated list.
I don't know if this is considered cheating, but I would do this in another language and invoke it from bash. First I would write a script for this purpose in zsh: it accepts as parameter a string with colon separators and outputs a colon-separated list with duplicates removed:
#!/bin/zsh
original=${1?Parameter missing} # Original string
# Auxiliary array, which is set up to act like a Set, i.e. without
# duplicates
typeset -aU nodups_array
# Split the original strings on the colons and store the pieces
# into the array, thereby removing duplicates. The core idea for
# this is stolen from:
# https://stackoverflow.com/questions/2930238/split-string-with-zsh-as-in-python
nodups_array=("${(@s/:/)original}")
# Join the array back with colons and write the resulting string
# to stdout.
echo ${(j':')nodups_array}
If we call this script nodups_string, you can invoke it in your bash-setting as:
#!/bin/bash
a_path="/Library/Frameworks/Python.framework/Versions/3.8/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/local/bin:"
nodups_a_path=$(nodups_string "$a_path")
my_variable="export PATH=$nodups_a_path"
echo "myvariable : $myvariable"
The overall effect would be literally what you asked for. However, there is still an open problem I should point out: If one of the PATH components happens to contain a space, the resulting export statement can not validly be executed. This problem is also inherent into your original problem; you just didn't mention it. You could do something like
my_variable="export PATH=\"$nodups_a_path\""
to avoid this. Of course, I wonder why you go to such effort to generate a syntactically valid export command, instead of simply building the PATH directly where it is needed.
Side note: if you used zsh as your shell instead of bash, and only want to keep your PATH free of duplicates, a simple
typeset -U path
would suffice; zsh takes care of the rest.
With awk:
awk -v RS=[:\"] 'NR > 1 { pth[$0]="" } END { for (i in pth) { if (i !~ /[[:space:]]+/ && i != "" ) { printf "%s:",i } } }' <<< "$a"
Set the record separator to : and double quotes. Then, whenever the record number is greater than one, add an entry to an array called pth, with the path as the index. At the end, loop through the array, printing the paths separated by :.
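If staying in bash is preferred, a pure-bash sketch using an associative array is also possible. Note the assumption that this runs under bash 4+; the /bin/bash shipped with macOS Catalina is 3.2, which lacks associative arrays, so there you would need a newer bash (e.g. from Homebrew or MacPorts):

```shell
#!/bin/bash
# Sketch: remove duplicates from a colon-separated list (bash 4+).
dedup_path() {
    local IFS=':'          # split the argument on colons below
    local -A seen=()       # associative array acting as a set
    local out='' part
    for part in $1; do     # unquoted on purpose: IFS=':' does the split
        if [[ -n $part && -z ${seen[$part]} ]]; then
            seen[$part]=1
            out+="${out:+:}$part"   # join back with colons
        fi
    done
    printf '%s\n' "$out"
}

dedup_path "/usr/bin:/bin:/usr/bin:/opt/local/bin"
# → /usr/bin:/bin:/opt/local/bin
```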

Create variable by combining text + another variable

Long story short, I'm trying to grep a value contained in the first column of a text file by using a variable.
Here's a sample of the script, with the grep command that doesn't work:
for ii in `cat list.txt`
do
    grep '^$ii' >outfile.txt
done
Contents of list.txt :
123,"first product",description,20.456789
456,"second product",description,30.123456
789,"third product",description,40.123456
If I perform grep '^123' list.txt, it produces the correct output... Just the first line of list.txt.
If I try to use the variable (i.e. grep '^ii' list.txt) I get a "^ii command not found" error. I tried combining text with the variable to get it to work:
VAR1= "'^"$ii"'"
but the VAR1 variable contained a carriage return after the $ii variable:
'^123
'
I've tried a laundry list of things to remove the CR/LF (i.e. sed & awk), but to no avail. There has to be an easier way to perform the grep command using the variable. I would prefer to stay with the grep command because it works perfectly when performing it manually.
You have things mixed up in the command grep '^ii' list.txt. The character ^ anchors the match to the beginning of the line, and $ introduces the value of a variable.
When you want to grep for 123 in the variable ii at the beginning of the line, use
ii="123"
grep "^$ii" list.txt
(You should use double quotes here)
Good moment for learning good habits: keep your variable names lowercase (well done) and use curly braces (they don't hurt, and are needed in other cases):
ii="123"
grep "^${ii}" list.txt
Now we are both forgetting something: our grep will also match
1234,"4-digit product",description,11.1111. Include a , in the grep:
ii="123"
grep "^${ii}," list.txt
And how did you get the "^ii command not found" error? I think you used backquotes (the old way of nesting a command; better is $(...), e.g. echo "example: $(date)") and wrote
grep `^ii` list.txt # wrong !
#!/bin/sh
# Read every character before the first comma into the variable ii.
while IFS=, read ii rest; do
    # Echo the value of ii. If these values are what you want, you're done; no
    # need for grep.
    echo "ii = $ii"
    # If you want to find something associated with these values in another
    # file, however, you can grep the file for the values. Use double quotes so
    # that the value of $ii is substituted in the argument to grep.
    grep "^$ii" some_other_file.txt >>outfile.txt
done <list.txt

Printing to multiple files with shell script for loop variable

None of the files are showing up. I tried putting the file to be written to in quotes, but no files are being written; all I am getting is one file called person.txt.
#!/bin/sh
cut -f 1 $1 > temp1.txt
cut -f 2-3 $2 > temp2.txt
for ((i=3;i<103;i++)); do
    cut -f $i $1 > temp3.txt
    paste temp1.txt temp2.txt temp3.txt > $HOME/Desktop/Plots/person$iPlot.txt
done
Without having tested it, a problem in your script is
$HOME/Desktop/Plots/person$iPlot.txt
Bash is not going to substitute your variable i as you are expecting; it tries to resolve a variable named iPlot instead. As that variable has not been assigned any value, you end up with person.txt. This happens because bash reads the longest possible variable name after the $: letters, digits and underscores all belong to the name, so it takes iPlot, and only the period before txt ends it.
To make sure that your variable i is used, try
$HOME/Desktop/Plots/person${i}Plot.txt
instead.
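Putting the pieces together, a corrected sketch might look like the following. Note that the shebang must be bash, because for ((…)) is not POSIX sh; the output directory is the one from the question, and the real cut/paste loop is left in comments so the sketch stands alone:

```shell
#!/bin/bash
# Sketch: with braces, bash substitutes i rather than a nonexistent iPlot.
outdir="$HOME/Desktop/Plots"   # output directory from the question

make_name() {
    # Build the output filename for person/column $1.
    printf '%s/person%sPlot.txt' "$outdir" "$1"
}

echo "$(make_name 3)"   # prints $outdir/person3Plot.txt

# The full loop from the question would then become:
#   cut -f 1 "$1" > temp1.txt
#   cut -f 2-3 "$2" > temp2.txt
#   for ((i = 3; i < 103; i++)); do
#       cut -f "$i" "$1" > temp3.txt
#       paste temp1.txt temp2.txt temp3.txt > "$(make_name "$i")"
#   done
```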

Shell file doesn't extract value properly [grep/cut] from file [bash]

I have a test.txt file which contains key value pair just like any other property file.
test.txt
Name="ABC"
Age="24"
Place="xyz"
I want to extract the value of each key into a corresponding variable. For that I have written the following shell script:
master.sh
file=test.txt
while read line; do
    value1=`grep -i 'Name' $file|cut -f2 -d'=' $file`
    value2=`grep -i 'Age' $file|cut -f2 -d'=' $file`
done <$file
but when I execute it, it doesn't run properly: it gives me the entire line extracted by the grep part of the command as output. Can someone please point me to the error?
If I understood your question correctly, the following Bash script should do the trick:
#!/bin/bash
IFS="="
while read k v ; do
    test -z "$k" && continue # skip empty lines
    declare $k=$v
done <test.txt
echo $Name
echo $Age
echo $Place
Why is that working? Most information can be retrieved from bash's man page:
IFS is the "Internal Field Separator", which bash's 'read' builtin uses to separate fields in each line. By default, IFS splits on whitespace, but here it is redefined to split on the equals sign. It is a bash-only equivalent of the 'cut' command with the equals sign defined as delimiter ('-d =').
The 'read' builtin reads two fields from a line. As only two variables are provided (k and v), the first field ends up in k, all remaining fields (i.e. after the equal sign) end up in v.
As the comment states, empty lines are skipped, i.e. those where the k variable is empty (test -z).
'declare' is a bash builtin as well; it performs the assignment after expanding $k and $v, so the declare statement becomes equivalent to Name="ABC" etc.
'<test.txt' after 'done' tells bash to read test.txt and to feed it line by line into the 'read' builtin further up.
The three 'echo' statements are simply to show that this solution did work.
The format of the file is valid sh syntax, so you could just source the file:
source test.txt
In any case, your code doesn't work because after the pipe you shouldn't specify the file again.
value1=$(grep -i 'Name' "$file" | cut -f2 -d'=')
would keep your logic
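A sketch of the source approach, run against a throwaway copy of test.txt (the mktemp file is just for the demo):

```shell
#!/bin/bash
# Demo: because Name="ABC" is valid shell syntax, sourcing the file
# defines the variables directly in the current shell.
cfg=$(mktemp)
printf 'Name="ABC"\nAge="24"\nPlace="xyz"\n' > "$cfg"
source "$cfg"
echo "$Name $Age $Place"   # → ABC 24 xyz
rm -f "$cfg"
```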
This is a comment, but the comment box does not allow formatting. Consider rewriting this:
while read line; do
value1=`grep -i 'Name' $file|cut -f2 -d'=' $file`
value2=`grep -i 'Age' $file|cut -f2 -d'=' $file`
done <$file
as:
while IFS== read key value; do
    case $key in
        Name|name) value1=$value;;
        Age|age) value2=$value;;
    esac
done < $file
Parsing the line multiple times via cut is inefficient. This is slightly different from your version, since the comparison is case sensitive, but that is easily fixed if necessary: for example, you could preprocess the input file and convert everything to lower case. You can do the preprocessing on the fly, but be aware that this will put your while loop in a subprocess, which requires some additional care (since the variable definitions will end with the pipeline); that is not significant, though. Running the entire file through grep twice for each line of the file, however, is O(n^2), and ghastly! (Why are you reading the entire file anyway, instead of just echoing the line?)
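To illustrate the subshell caveat: piping a lower-cased copy of the file into the loop (tr ... | while ...) would run the loop in a subshell, so value1/value2 would be lost when the pipeline ends. The sketch below keeps the loop in the current shell and instead lowercases only the key with bash 4's ${key,,} expansion; the demo file is a throwaway stand-in for test.txt:

```shell
#!/bin/bash
# Sketch: case-insensitive key matching without a pipeline subshell.
# A pipeline such as
#   tr '[:upper:]' '[:lower:]' < "$file" | while ...; done
# would leave value1/value2 unset after the loop.
parse_file() {
    local key value
    while IFS='=' read -r key value; do
        case "${key,,}" in   # ${key,,} lowercases the key (bash 4+)
            name) value1=$value ;;
            age)  value2=$value ;;
        esac
    done < "$1"
}

# Demo with a throwaway file shaped like the question's test.txt:
demo=$(mktemp)
printf 'Name="ABC"\nAge="24"\nPlace="xyz"\n' > "$demo"
parse_file "$demo"
echo "$value1 $value2"   # → "ABC" "24"
rm -f "$demo"
```

Note that the values keep their surrounding double quotes, since here they are data, not syntax; strip them with another expansion if needed.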
