How do I assign a command that runs a file to a variable?
I have this line:
file=$(./file1.sh) #how to properly do this?
if ($0 == $file)
echo "good to go!"
The goal of $file is to check the name of the shell script being run.
$file should equal the command you use to run the shell script (i.e. "./file.sh").
How do I properly do this?
Close ..
file="file1.sh"
if [[ "${0##*/}" == "$file" ]]; then
echo "good to go!"
fi
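For reference, ${0##*/} is a parameter expansion that strips the longest prefix matching */ (the directory part of $0), leaving only the script's basename. A minimal illustration, using a made-up path:
path="./some/dir/file1.sh"
echo "${path##*/}"    # prints: file1.sh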
Related
cat sample.sh
a=$1
if [ "$a" = "Dhinakaran Ramu" ];then
echo "Present"
else
echo "Not Present"
fi
sample.sh Dhinakaran Ramu
Answer is "Not Present"
Make sure that when you run the script, you pass the text in quotation marks:
$ ./sample.sh "Dhinakaran Ramu"
It worked when executing
sample.sh "Dhinakaran Ramu"
Note: if we execute
sample.sh Dhinakaran Ramu
then the script receives not one argument, but two. In the script you use $1, because you expect one argument.
When you run the script $1 is Dhinakaran and $2 is Ramu.
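If callers may also pass the name unquoted, one possible variation (an assumption about what you want, not part of the original script) is to compare against all positional parameters joined by "$*":
a="$*"    # joins all arguments with single spaces (the first character of IFS)
if [ "$a" = "Dhinakaran Ramu" ]; then
echo "Present"
else
echo "Not Present"
fi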
Try
sample.sh "Dhinakaran Ramu"
I am working on a shell script to read config properties from a .properties file; below is a sample config:
RCTP_servername=test1
RCTP_databasename=test2
RCTP_portnumber=test3
RCTP_username=test4
RCTP_password=test5
I have written a shell script as below, but it doesn't work. Could anyone please have a look and guide me on how to solve this?
#building the keys based on environment
environment=RCTP
servername_key="$environment"_servername
databasename_key="$environment"_databasename
portnumber_key="$environment"_portnumber
username_key="$environment"_username
password_key="$environment"_password
#read the config.properties files
file=serverconfig.properties
if [ -f "$file" ]
then
echo "$file found."
while IFS='=' read -r key value
do
key=$(echo $key )
eval "${key}='${value}'"
done < "$file"
servername_value=${servername_key}
databasename_value=${databasename_key}
portnumber_value=${portnumber_key}
username_value=${username_key}
password_value=${password_key}
else
echo "$file not found."
fi
echo "$servername_value"
but I am getting the below error when I try to run it: ./test_script_fte.sh: line 23: ${servername_key}: bad substitution
The expected output when echo $servername_value is executed is test1.
Though eval is not recommended most of the time, here is a solution that uses an indirect reference:
echo "${!servername_value}"
I have also tweaked the logic to source the properties file instead of using eval as in your logic. The complete script is below.
#!/bin/bash
#building the keys based on environment
environment=RCTP
servername_key="$environment"_servername
databasename_key="$environment"_databasename
portnumber_key="$environment"_portnumber
username_key="$environment"_username
password_key="$environment"_password
#read the config.properties files
file=serverconfig.properties
if [ -f "$file" ]
then
echo "$file found."
# sourcing the properties file in the current shell to fetch the values
source "$file"
servername_value=${servername_key}
databasename_value=${databasename_key}
portnumber_value=${portnumber_key}
username_value=${username_key}
password_value=${password_key}
else
echo "$file not found."
fi
echo "${!servername_value}"
echo "${!databasename_value}"
echo "${!portnumber_value}"
echo "${!username_value}"
echo "${!password_value}"
It seems you want to use the value of a variable as the name of another variable.
Please replace your last line with the following:
eval echo \"\$$servername_value\"
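Roughly, this is what that eval does, shown with the same example values as above (an illustration, not part of the original answer):
RCTP_servername=test1
servername_value=RCTP_servername
eval echo \"\$$servername_value\"    # eval runs: echo "$RCTP_servername"  ->  test1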
I am trying to execute a hallo_word.sh that is stored in ~/bin from this script, which is stored on my ~/Desktop. I have made both scripts executable, but I always get the "Problem" message. Any ideas?
#!/bin/sh
clear
dir="$PATH"
read -p "which file you want to execute" fl
echo ""
for fl in $dir
do
if [ -x "$fl" ]
then
echo "executing=====>"
./$fl
else
echo "Problem"
fi
done
This line has two problems:
for fl in $dir
$PATH is colon-separated, but for expects whitespace-separated values. You can change that by setting the IFS variable, which changes the field separator the shell uses when splitting words (and which tools like awk also honor).
$fl contains the name of the file you want to execute, but you overwrite its value with the contents of $dir.
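As a quick standalone illustration of the IFS point (with a made-up colon-separated value, not your actual $PATH):
items="a:b:c"
IFS=:
for x in $items ; do
echo "$x"    # prints a, b and c on separate lines
done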
Fixed:
#!/bin/sh
clear
read -p "which file you want to execute" file
echo
IFS=:
for dir in $PATH ; do
if [ -x "$dir/$file" ]
then
echo "executing $dir/$file"
exec "$dir/$file"
fi
done
echo "Problem"
You could also be lazy and let a subshell handle it.
PATH="whatever" /bin/bash -c 'command -v my_command' >/dev/null
if [ $? -ne 0 ]; then
# Problem, could not be found.
else
# No problem
fi
There is no need to over-complicate things.
command(1) is a builtin command that allows you to check if a command exists.
The PATH value contains all the directories in which executable files can be run without explicit qualification. So you can just call the command directly.
#!/bin/bash
clear
# r for raw input, e to use readline, add a space for clarity
read -rep "Which file you want to execute? " fl || exit 1
echo ""
"$fl" || { echo "Problem" ; exit 1 ; }
I quote the name as it could have spaces.
To test if the command exists before execution use type -p
#!/bin/bash
clear
# r for raw input, e to use readline, add a space for clarity
read -rep "Which file you want to execute? " fl || exit 1
echo ""
type -p "$fl" >/dev/null || exit 1
"$fl" || { echo "Problem" ; exit 1 ; }
How do I check if file exists in bash?
When I try to do it like this:
FILE1="${@:$OPTIND:1}"
if [ ! -e "$FILE1" ]
then
echo "requested file doesn't exist" >&2
exit 1
elif
<more code follows>
I always get following output:
requested file doesn't exist
The program is used like this:
script.sh [-g] [-p] [-r FUNCTION_ID|-d FUNCTION_ID] FILE
Any ideas please?
I will be glad for any help.
P.S. I wish I could show the entire file without the risk of being fired from school for having a duplicate. If there is a private method of communication I will happily oblige.
My mistake. I was forcing a binary file into the wrong place. Thanks for everyone's help.
Little trick to debugging problems like this. Add these lines to the top of your script:
export PS4="\$LINENO: "
set -xv
The set -xv will print out each line before it is executed, and then the line once the shell interpolates variables, etc. The $PS4 is the prompt used by set -xv. This will print the line number of the shell script as it executes. You'll be able to follow what is going on and where you may have problems.
Here's an example of a test script:
#! /bin/bash
export PS4="\$LINENO: "
set -xv
FILE1="${@:$OPTIND:1}" # Line 6
if [ ! -e "$FILE1" ] # Line 7
then
echo "requested file doesn't exist" >&2
exit 1
else
echo "Found File $FILE1" # Line 12
fi
And here's what I get when I run it:
$ ./test.sh .profile
FILE1="${@:$OPTIND:1}"
6: FILE1=.profile
if [ ! -e "$FILE1" ]
then
echo "requested file doesn't exist" >&2
exit 1
else
echo "Found File $FILE1"
fi
7: [ ! -e .profile ]
12: echo 'Found File .profile'
Found File .profile
Here, I can see that I set $FILE1 to .profile, and that my script understood ${@:$OPTIND:1}. The best thing about this is that it works on all shells down to the original Bourne shell. That means if you aren't running Bash as you think you might be, you'll see where your script is failing, and maybe fix the issue.
I suspect you might not be running your script in Bash. Did you put #!/bin/bash at the top?
script.sh [-g] [-p] [-r FUNCTION_ID|-d FUNCTION_ID] FILE
You may want to use getopts to parse your parameters:
#! /bin/bash
USAGE=" Usage:
script.sh [-g] [-p] [-r FUNCTION_ID|-d FUNCTION_ID] FILE
"
while getopts gpr:d: option
do
case $option in
g) g_opt=1;;
p) p_opt=1;;
r) rfunction_id="$OPTARG";;
d) dfunction_id="$OPTARG";;
[?])
echo "Invalid Usage" 1>&2
echo "$USAGE" 1>&2
exit 2
;;
esac
done
if [[ -n $rfunction_id && -n $dfunction_id ]]
then
echo "Invalid Usage: You can't specify both -r and -d" 1>&2
echo "$USAGE" >2&
exit 2
fi
shift $(($OPTIND - 1))
[[ -n $g_opt ]] && echo "-g was set"
[[ -n $p_opt ]] && echo "-p was set"
[[ -n $rfunction_id ]] && echo "-r was set to $rfunction_id"
[[ -n $dfunction_id ]] && echo "-d was set to $dfunction_id"
[[ -n $1 ]] && echo "File is $1"
To recap and add to @DavidW.'s excellent answer:
Check the shebang line (first line) of your script to ensure that it's executed by bash: is it #!/bin/bash or #!/usr/bin/env bash?
Inspect your script file for hidden control characters (such as \r) that can result in unexpected behavior; run cat -v scriptFile | fgrep '^M' - it should produce NO output; if the file does contain \r chars., they would show as ^M.
To remove the \r instances (more accurately, to convert Windows-style \r\n newline sequences to Unix \n-only sequences), you can use dos2unix file to convert in place; if you don't have this utility, you can use sed 's/'$'\r''$//' file > outfile (CAVEAT: use a DIFFERENT output file, otherwise you'll destroy your input file); to remove all \r instances (even if not followed by \n), use tr -d '\r' < file > outfile (CAVEAT: use a DIFFERENT output file, otherwise you'll destroy your input file).
In addition to @DavidW.'s great debugging technique, you can add the following to visually inspect all arguments passed to your script:
i=0; for a; do echo "\$$((i+=1))=[$a]"; done
(The purpose of enclosing each value in [...] is to see the exact boundaries of the values.)
This will yield something like:
$1=[-g]
$2=[input.txt]
...
Note, though, that nothing at all is printed if no arguments were passed.
Try to print FILE1 to see if it has the value you want. If that is not the problem, here is a simple script (source site below):
#!/bin/bash
file="${@:$OPTIND:1}"
if [ -f "$file" ]
then
echo "$file found."
else
echo "$file not found."
fi
http://www.cyberciti.biz/faq/unix-linux-test-existence-of-file-in-bash/
Instead of plucking an item out of "$#" in a tricky way, why don't you shift off the args you've processed with getopts:
while getopts ...
done
shift $(( OPTIND - 1 ))
FILE1=$1
I want main.ksh to run both one.ksh and second.ksh, but second.ksh should run only if the output of one.ksh is "1". So if the output is anything other than "1", second.ksh should not run.
cat one.ksh
#!/usr/bin/ksh
echo "1"
cat second.ksh
#!/usr/bin/ksh
echo "2"
I did this:
#!/usr/bin/ksh
ksh one.ksh > one.txt
file="one.txt"
while read line
do
if [ "$line" -eq 1 ] ;then
ksh second.ksh
else
echo "one.ksh is no good"
fi
done <"$file"
Is there a better way to do this?
Instead of echo 1 to proceed from the first script, you should use exit 0. If it shouldn't proceed, exit 1.
This is the standard way of signaling success and failure in Unix.
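For instance, a minimal sketch of one.ksh rewritten this way (what counts as success is up to you; the unconditional exit 0 here just mirrors the original unconditional echo "1"):
#!/usr/bin/ksh
# do whatever work one.ksh needs to do, then signal the result:
exit 0    # success; use a non-zero status such as 'exit 1' to signal failure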
Once you do this, you can use any of:
first.ksh && second.ksh
or
if first.ksh
then
second.ksh
fi
or
set -e # Automatically exit script if a command fails
first.ksh
second.ksh
out=$(./one.ksh)
if [ "x$out" = "x1" ]; then
./second.ksh
fi