I'm using a shell script in Git Bash to call sqlcmd to run some SQL scripts. The script names are based on the Git branch name, so the command is sqlcmd -E -S mySQLServer -d myDB "$branchsql"
It works fine from the command line, but I want to repeat it for several git branches, so I have a script that calls this script for a list of branches:
while read branch
do
. C:/sqlScript.sh $branch
done < "$1"
The file with the list of branches is passed in $1
What happens is that it reads the first branch from the list but never moves on to the next one: it repeatedly executes sqlScript.sh with the same value in $branch.
If I change sqlScript.sh to just echo $1, everything works as expected. When I call sqlcmd, only the first branch is ever passed.
So why does sqlcmd mess things up?
Drop the leading dot: invoke C:/sqlScript.sh ... rather than . C:/sqlScript.sh ...
. script.sh is short for source script.sh: it will execute the commands listed in script.sh in the current shell. If the script contains a command such as exit, it will exit your current shell.
A regular invocation will start a separate shell, which won't mess with your current one.
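For example, the calling loop becomes (a minimal sketch, using the same file names as the question):
while read branch
do
C:/sqlScript.sh "$branch"
done < "$1"
Since sqlScript.sh now runs in its own shell, nothing it does can disturb the state of the loop in the calling script.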
I'm working with the githooks pre-commit.sample that ships with my version of Git on OS X (git version 2.24.3 (Apple Git-128)). There is something peculiar in the code, namely to do with a seemingly spurious exec.
The pre-commit sample contains the following code (irrelevant lines/blocks removed):
#!/bin/sh
against=HEAD
# Redirect output to stderr.
exec 1>&2
# If there are whitespace errors, print the offending file names and fail.
exec git diff-index --check --cached $against --
If I attempt to modify this code by appending validation after the last exec call, it never runs. Per a relevant AskUbuntu post, I understand what it is about exec that makes this happen.
However, what I don't understand is why the exec needs to happen in the first place. This line has the hook fail if there's trailing whitespace, but it appears to behave the same if I remove the exec and just directly call git diff-index ....
In other words, this:
git diff-index --check --cached $against --
...appears to behave like this:
exec git diff-index --check --cached $against --
...except the latter seems more restrictive. I can't find a difference between this file with or without the exec, except that the exec makes it so that the whitespace checking has to happen last.
Why would the sample creators choose the exec option, when it appears to behave the same as the ostensibly less restrictive direct call?
This could be a (perhaps misguided) attempt at being more efficient.
In general, in shell scripts, the return value from the script is that of the last command run, as Ronald noted in a comment. So:
#! /bin/sh
cmd1
cmd2
cmd3
exit $?
is just a long-winded / explicit way of doing:
#! /bin/sh
cmd1
cmd2
cmd3
The general rule here is that the shell takes each "pipeline"—a pipeline being defined as a series of commands with | symbols, which pipe to each other—and runs that pipeline within a fork-then-exec of the main shell process. So:
cmd1 | cmd2
cmd3
causes the main shell to fork once to run cmd1 | cmd2 (which, internally, requires another fork for each of the two commands), then fork again to run cmd3. Then, having run out of commands, the shell would exit with $?—the last pipeline's exit status—as its own status.
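As a quick illustration of that last point, consider this hypothetical script (not from the hook):
#! /bin/sh
true
false
Running it and then checking echo $? prints 1: false was the last pipeline run, so its status becomes the script's exit status.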
Adding redirections, such as:
cmd1 | cmd2 > file
"means" that the shell should fork, then run the pipeline cmd1 | cmd2 with its output redirected to that file. Of course cmd1's output is already redirected to cmd2's input so only cmd2's output is affected here—but we can see that cmd3's output is not redirected, so clearly, the redirection did not happen at the shell level, but rather within the sub-shell it forked to run the pipeline.1
What the exec keyword does is, in effect, prevent the fork. That is:
exec cmd > out
has the redirection take place in the top level shell, which then runs the given command with an exec system call without first calling fork. This replaces the shell with the command that is run (but hangs on to the process ID and all open file descriptors, until the command that is run here finishes).
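A minimal sketch of that replacement behaviour (the commands are hypothetical):
#! /bin/sh
exec ls -l
echo "never reached"
The echo can never run: by the time ls finishes there is no shell process left to run it. That is exactly why validation appended after the hook's final exec never runs.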
If we leave out the command itself, we get:
exec >out
which means that no command gets run, but the redirection takes place in the shell itself rather than in some sub-shell. So now every subsequent command, which does get a fork-and-exec, has its output sent to file out.
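For instance (a hypothetical sketch; the file name out is arbitrary):
#! /bin/sh
exec >out
echo hello
date
Both echo and date are forked-and-exec'd as usual, but their output lands in the file out rather than on the terminal.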
We see something like that in your own script:
exec 1>&2
which forces all subsequent commands' stdout to go to the same file descriptor as stderr.
Oddly, there's then only one subsequent command, which means that if the goal was efficiency, they could have used:
exec git diff-index --check --cached $against -- 1>&2
to put everything on a single line.
[1] In practice, shells actually do the file opening early, and have to do a whole lot of fancy footwork to shuffle file descriptors around between the fork and exec calls. With POSIX style job control, it's even worse: the shell has to do a lot of signal-directing work, making process groups, and so on. Writing a shell is hard, and as the V8 Unix and Plan 9 guys saw it, this meant that the overall OS design needed some reworking.
Exit status in general
As you noted in a reply:
Hence, if I have validation after a non-execed command, I'd need to make sure to check for a non-0 result from the git diff-index.
Yes. Note that shells in general (and /bin/sh in particular) have interesting flags that you can set from the command or #! line, or with the set command. One of these flags is the e flag, which makes the shell exit if a command has a non-zero exit code:[2]
#! /bin/sh -e
cmd1
cmd2
cmd3
is roughly equivalent to:
#! /bin/sh
cmd1 || exit
cmd2 || exit
cmd3
(we don't need the || exit on the last one, although we could use it harmlessly). The -e flag is often a good idea.
[2] Note that tested commands do not make the shell exit immediately, so that we can write:
if grep ...; then
thing to run when regexp is found
else
thing to run when regexp is not found
fi
There was a bug in some early versions of /bin/sh where this didn't work right: I remember fixing it, then discovering I'd either over-fixed or under-fixed it for cases like a && b || c and having to re-fix it.
In a bash script I want to get the name of the last command executed in the terminal and store it in a variable for later use. I know that !:0 doesn't work in a bash script, and I'm looking for a replacement for it.
For example:
#user enters pwd
> pwd
/home/paul
#I call my script and it shows the last command
> ./last_command
pwd
This related question, getting last executed command from script, didn't help; it just prints an empty line.
Tell the shell to continuously append commands to the history file:
export PROMPT_COMMAND="history -a"
Put the following into your script:
#!/bin/bash
echo "Your command was:"
tail -n 1 ~/.bash_history
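With those two pieces in place, the session from the question behaves as expected (a sketch; the prompt and path are illustrative):
> pwd
/home/paul
> ./last_command
Your command was:
pwd
This works because history -a runs just before each new prompt, so by the time last_command reads the history file, pwd is its final line.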
For what it's worth, this is what works for me; in my .bashrc I have:
export HISTCONTROL=ignoredups:erasedups
Then run one of the following, on the console or in a script respectively:
history 2
cm=$(history 1)
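For instance, on the console (a sketch; the history numbers will differ):
> pwd
/home/paul
> history 2
  101  pwd
  102  history 2
The current command is itself the most recent history entry, which is why the console variant asks for the last two entries.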
I have a shell script that runs the following two commands, connecting to a remote server and putting files via SFTP; let's call it "execute.sh":
sftp -b /usr/local/outbox/send.sh username@example.com
mv /usr/local/outbox/DD* /usr/local/outbox/completed/
Then in my "send.sh" i have following commands to be executed on the remote server.
cd ExampleFolder/outbox
put Files_*
bye
Now my problem is: if the first command ("sftp -b") fails due to a remote connection error or some network problem, the script still moves the files into the "completed" folder, which is incorrect. I want the next command ("mv") to run only if the first command ("sftp") connects successfully.
Can we do this by enhancing this shell script, or with some workaround?
My Shell is Bash.
Simply insert && between the two commands:
sftp -b /usr/local/outbox/send.sh username@example.com && \
mv /usr/local/outbox/DD* /usr/local/outbox/completed/
If the first fails, the second one will not run.
Alternatively, you can check the exit code of the first command explicitly. The exit code of the last command is always saved in $?, and it is 0 if the command succeeded:
sftp -b /usr/local/outbox/send.sh username@example.com
if [ $? -eq 0 ]
then
mv /usr/local/outbox/DD* /usr/local/outbox/completed/
fi
If you really wanted to capture the output of the first command, you could run it in $(...) and store the value in a variable:
sftpOutput="$(sftp -b /usr/local/outbox/send.sh username@example.com)"
and then use this variable in further checks, e.g. match it against a pattern in the next if.
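For example (a hypothetical sketch; the pattern being matched is purely illustrative, not taken from the question):
sftpOutput="$(sftp -b /usr/local/outbox/send.sh username@example.com 2>&1)"
if [ $? -eq 0 ] && [[ "$sftpOutput" != *"Connection closed"* ]]
then
mv /usr/local/outbox/DD* /usr/local/outbox/completed/
fi
Here $? still holds sftp's exit status, because the status of a plain assignment from $(...) is the status of the command inside it.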
I have two shell scripts, a.sh and b.sh.
How can I call b.sh from within the shell script a.sh?
There are a couple of different ways you can do this:
Make the other script executable with chmod a+x /path/to/file (see Nathan Lilienthal's comment), add the #!/bin/bash line (called a shebang) at the top, and add the directory containing the file to the $PATH environment variable. Then you can call it as a normal command;
Or call it with the source command (a bash synonym for .), like this:
source /path/to/script
Or use the bash command to execute it, like:
/bin/bash /path/to/script
The first and third approaches execute the script as another process, so variables and functions in the other script will not be accessible.
The second approach executes the script in the first script's process, and pulls in variables and functions from the other script (so they are usable from the calling script).
With the second method, if the second script calls exit, it will exit the first script as well; that does not happen with the first and third methods.
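A minimal sketch of the variable-sharing difference (the file names and variable are hypothetical; assume b.sh has been made executable as described above):
# b.sh
greeting="hello from b"

# a.sh
./b.sh                              # child process: $greeting stays in the child
echo "after executing: [$greeting]" # prints "after executing: []"
source ./b.sh                       # same process: $greeting is now visible
echo "after sourcing: [$greeting]"  # prints "after sourcing: [hello from b]"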
Check this out.
#!/bin/bash
echo "This script is about to run another script."
sh ./script.sh
echo "This script has just run another script."
There are a couple of ways you can execute the other script from your own:
#!/bin/bash
SCRIPT_PATH="/path/to/script.sh"
# Here you execute your script
"$SCRIPT_PATH"
# or
. "$SCRIPT_PATH"
# or
source "$SCRIPT_PATH"
# or
bash "$SCRIPT_PATH"
# or
eval '"$SCRIPT_PATH"'
# or
OUTPUT=$("$SCRIPT_PATH")
echo $OUTPUT
# or
OUTPUT=`"$SCRIPT_PATH"`
echo $OUTPUT
# or
("$SCRIPT_PATH")
# or
(exec "$SCRIPT_PATH")
All of these work correctly for paths containing spaces!
The answer I was looking for:
( exec "path/to/script" )
As mentioned, exec replaces the shell without creating a new process. However, we can put it in a subshell, which is done using the parentheses.
EDIT:
Actually ( "path/to/script" ) is enough.
If you have another file in the same directory, you can either do:
bash another_script.sh
or
source another_script.sh
or
. another_script.sh
When you use bash instead of source, the script cannot alter the environment of the parent script. The . command is the POSIX standard, while source is a more readable bash synonym for . (I prefer source over .). If your script resides elsewhere, just provide the path to that script; both relative and full paths should work.
It depends.
Briefly...
If you want to load variables into the current shell and execute the file there, you can use source myshellfile.sh in your code. Example:
#!/bin/bash
set -x
echo "This is an example of run another INTO this session."
source my_lib_of_variables_and_functions.sh
echo "The function internal_function() is defined into my lib."
returned_value=internal_function()
echo $this_is_an_internal_variable
set +x
If you just want to execute a file and the only thing interesting to you is the result, you can do:
#!/bin/bash
set -x
./executing_only.sh
bash i_can_execute_this_way_too.sh
bash or_this_way.sh
set +x
You can use /bin/sh to call or execute another script (via your actual script):
# cat showdate.sh
#!/bin/bash
echo "Date is: `date`"
# cat mainscript.sh
#!/bin/bash
echo "You are login as: `whoami`"
echo "`/bin/sh ./showdate.sh`" # exact path for the script file
The output would be:
# ./mainscript.sh
You are logged in as: root
Date is: Thu Oct 17 02:56:36 EDT 2013
First you have to include the file you call:
#!/bin/bash
. includes/included_file.sh
then you call your function like this:
#!/bin/bash
my_called_function
A simple source will help you. For example:
#!/bin/bash
echo "My shell_1"
source my_script1.sh
echo "Back in shell_1"
Just add in a line whatever you would have typed in a terminal to execute the script!
e.g.:
#!/bin/bash
./myscript.sh &
If the script to be executed is not in the same directory, just use the complete path of the script, e.g.:
/home/user/script-directory/myscript.sh &
This is what worked for me; it is the content of the main sh script that executes the other one:
#!/bin/bash
source /path/to/other.sh
The top answer suggests adding a #!/bin/bash line as the first line of the sub-script being called. But even if you add the shebang, it is much faster* to run the script in a sub-shell and capture the output:
$(source SCRIPT_NAME)
This works when you want to keep running the same interpreter (e.g. from bash to another bash script) and ensures that the shebang line of the sub-script is not executed.
For example:
#!/bin/bash
SUB_SCRIPT=$(mktemp)
echo "#!/bin/bash" > $SUB_SCRIPT
echo 'echo $1' >> $SUB_SCRIPT
chmod +x $SUB_SCRIPT
if [[ $1 == "--source" ]]; then
for X in $(seq 100); do
MODE=$(source $SUB_SCRIPT "source on")
done
else
for X in $(seq 100); do
MODE=$($SUB_SCRIPT "source off")
done
fi
echo $MODE
rm $SUB_SCRIPT
Output:
~ ❯❯❯ time ./test.sh
source off
./test.sh 0.15s user 0.16s system 87% cpu 0.360 total
~ ❯❯❯ time ./test.sh --source
source on
./test.sh --source 0.05s user 0.06s system 95% cpu 0.114 total
* For example, when antivirus or security tools are running on a device it might take an extra 100 ms to exec a new process.
pathToShell="/home/praveen/"
chmod a+x "${pathToShell}myShell.sh"
sh "${pathToShell}myShell.sh"
#!/bin/bash
# Here you define the absolute path of your script
scriptPath="/home/user/pathScript/"
# Name of your script
scriptName="myscript.sh"
# Here you execute your script
"${scriptPath}${scriptName}"
# Result of script execution
result=$?
chmod a+x /path/to/file-to-be-executed
That was the only thing I needed. Once the script to be executed is made executable like this, you (at least in my case) don't need any other extra operation like sh or ./ when calling the script.
Thanks to the comment of @Nathan Lilienthal.
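For example (a minimal sketch; the path is hypothetical):
chmod a+x /path/to/other_script.sh
/path/to/other_script.sh    # callable directly now that it is executable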
Assume the new file is "/home/satya/app/app_specific_env" and the file contents are as follows:
#!/bin/bash
export FAV_NUMBER="2211"
Append a reference to this file to your ~/.bashrc file:
source /home/satya/app/app_specific_env
Whenever you restart the machine or log in again, try echo $FAV_NUMBER in the terminal. It will output the value.
If you want to see the effect right away, run source ~/.bashrc on the command line.
There are some pitfalls when importing functions from another file.
First: you don't need to make that file executable. Better not to! Just add
. file
to import all of its functions; they will then behave as if they were defined in your own file.
Second: you may already have a function with the same name. It will be overwritten, which is bad. You can declare a reference under a new name, like this,
declare -f new_function_name=old_function_name
and only do the import after that, so you can still call the old function by its new name.
Third: you can only import the full list of functions defined in the file. If some are not needed, you can unset them. But if you overwrite your own functions and then unset, they are lost; if you set a reference to them as described above, you can restore them after the unset under the same name.
Finally, importing like this is dangerous and not so simple in general. Be careful! You could write a script to make it easier and safer. If you only use some of the functions (not all), it is better to split them into different files. Unfortunately this technique is not well supported in bash. In Python, for example, and some other scripting languages, it is easy and safe to import only the functions you need, under names of your own choosing. We would all like future bash versions to offer the same functionality, but for now we must write a lot of additional code to get what you want.
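For instance, a minimal sketch of the "import everything, then drop what you don't need" approach (the file and function names are hypothetical):
. ./lib.sh                  # defines, say, useful_fn and internal_helper
unset -f internal_helper    # remove the helper from the current shell
useful_fn                   # still available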
Use backticks.
$ ./script-that-consumes-argument.sh `sh script-that-produces-argument.sh`
This fetches the output of the producer script and passes it as an argument to the consumer script.
I'm using this bash script:
for a in `sort -u $HADOOP_HOME/conf/slaves`; do
rsync -e ssh -a "${HADOOP_HOME}/conf" ${a}:"${HADOOP_HOME}"
done
for a in `sort -u $HBASE_HOME/conf/regionservers`; do
rsync -e ssh -a "${HBASE_HOME}/conf" ${a}:"${HBASE_HOME}"
done
When I call this script directly from the shell, there are no problems and it works fine. But when I call it from another script, although it does its job, I get this message at the end:
sort: open failed: /conf/slaves: No such file or directory
sort: open failed: /conf/regionservers: No such file or directory
I have set $HADOOP_HOME and $HBASE_HOME in /etc/profile, and the script does its job right. But I don't understand why it gives this message at the end.
Are you sure it's doing it right? When you call this script from the shell it acts as an interactive shell, which reads and sources /etc/profile and ~/.bash_profile if it exists. When you call it from another script it runs as a non-interactive shell and won't source those files. If you want a non-interactive shell to source a file, you can do this by setting the BASH_ENV environment variable.
#!/bin/bash
export BASH_ENV=/etc/profile
./call/to/your/HADOOP/script.sh
Everything points to those variables not being defined when your script runs.
You should ensure that they are set for your script. Before the first loop, place the line:
echo "[${HADOOP_HOME}] [${HBASE_HOME}]"
and make sure that doesn't output "[] []" (or even one "[]").
Additionally, put a set -x line at the top of the script - this will output each line before executing it, so you can see what's being done.
Keep in mind that some shells don't pass on environment variables to subshells unless you explicitly export them (setting them is not enough).
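A minimal sketch of that difference (the path is hypothetical; assume the variable is not already exported in your environment):
HADOOP_HOME=/opt/hadoop                          # set, but not exported
bash -c 'echo "child sees: [$HADOOP_HOME]"'      # prints: child sees: []
export HADOOP_HOME                               # now export it
bash -c 'echo "child sees: [$HADOOP_HOME]"'      # prints: child sees: [/opt/hadoop]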