I want this script to print 1, 2, 3... without the use of functions: just execute two.sh, then carry on where it left off. Is that possible?
[root@server:~]# cat testing.sh
#!/bin/bash
echo "1"
exec ./two.sh
echo "3"
[root@server:~]# cat two.sh
#!/bin/bash
echo "2"
return
exec, if you give it a program name(a), will replace the current program with whatever you specify.
If you want to just run the script (in another process) and return, simply use:
./two.sh
to do that.
For this simple case, you can also execute the script in the context of the current process with:
. ./two.sh
That will not start up a new process but will have the side-effect of allowing two.sh to affect the current shell's environment. While that's not a problem for your current two.sh (since all it does is echo a line), it may be problematic for more complicated scripts (for example, those that set environment variables).
(a) Without a program name, it changes certain properties of the current program, such as:
exec >/dev/null
which simply starts sending all standard output to the bit bucket.
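To make the sourcing side-effect mentioned above concrete, here is a minimal sketch (setmark.sh and MARKER are hypothetical names introduced just for this demonstration):
printf 'MARKER=set_by_script\n' > setmark.sh
bash setmark.sh                            # child shell: the assignment dies with it
echo "after child run: ${MARKER:-unset}"   # prints: after child run: unset
. ./setmark.sh                             # sourced: runs in the current shell
echo "after sourcing:  $MARKER"            # prints: after sourcing:  set_by_script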
Sure, just run:
echo "1"
./two.sh
echo "3"
Specifics:
I'm trying to build a bash script which needs to do a couple of things.
Firstly, it needs to run a third-party script that I cannot modify. This script builds a project and then starts a node server which outputs data to the terminal continually. This process needs to continue indefinitely, so I can't have any exit codes.
Secondly, I need to wait for a specific line of output from the first script, namely 'Started your app.'.
Once that line has been output to the terminal, I need to launch a separate set of commands, either from another subscript or from an if or while block, which will change a few lines of code in the project that was built by the first script to resolve some dependencies for a later step.
So, how can I capture the output of the first subscript and use it to run another set of commands when a particular line is output to the terminal, all while letting the first script keep running in the terminal, without using timers, and without creating a huge file from subscript1's output, since it will run indefinitely?
Pseudo-code:
#!/usr/bin/env bash
# This script needs to stay running & will output to the terminal (at some point)
# a string that we need to wait/watch for to launch subscript2
sh subscript1
# This can't run until subscript1 has output a particular string to the terminal
# This could be another script, or an if or while block
sh subscript2
I have been beating my head against my desk for hours trying to get this to work. Any help would be appreciated!
I think this is a bad idea — much better to have subscript1 changed to be automation-friendly — but in theory you can write:
sh subscript1 \
  | {
      while IFS= read -r line ; do
        printf '%s\n' "$line"                       # echo each line back to the terminal
        if [[ "$line" = 'Started your app.' ]] ; then
          sh subscript2 &                           # launch the second script without blocking
          break                                     # stop scanning line by line
        fi
      done
      cat                                           # keep streaming the rest of subscript1's output
    }
I'm looking at https://stackoverflow.com/a/10225050/1737158
And in the same question there is an answer using the timeout command, but it's not available on all OSes, so I want to avoid it.
What I try to do is:
demo="$(top)" &
TASK_PID=$!
sleep 3
echo "TASK_PID: $TASK_PID"
echo "demo: $demo"
And I expect to have nothing in the $demo variable, since the top command never ends.
Now I get an empty result, which is "acceptable". But when I reuse the same thing with a command that should return a value, I still get an empty result, which is not OK. E.g.:
demo="$(uptime)" &
TASK_PID=$!
sleep 3
echo "TASK_PID: $TASK_PID"
echo "demo: $demo"
This should return the uptime result, but it doesn't. I also tried to kill the process by TASK_PID, but that always fails. If a command fails, I expect to have stderr captured somehow; it can be in a different variable, but it has to be captured and not leaked out.
What happens when you execute var=$(cmd) &
Let's start by noting that the simple command in bash has the form:
[variable assignments] [command] [redirections]
for example
$ demo=$(echo 313) declare -p demo
declare -x demo="313"
According to the manual:
[..] the text after the = in each variable assignment undergoes tilde expansion, parameter expansion, command substitution, arithmetic expansion, and quote removal before being assigned to the variable.
Also, after the [command] above is expanded, the first word is taken to be the name of the command, but:
If no command name results, the variable assignments affect the current shell environment. Otherwise, the variables are added to the environment of the executed command and do not affect the current shell environment.
So, as expected, when demo=$(cmd) is run, the result of $(..) command substitution is assigned to the demo variable in the current shell.
Another point to note is related to the background operator &. It operates on so-called lists, which are sequences of one or more pipelines. Also:
If a command is terminated by the control operator &, the shell executes the command asynchronously in a subshell. This is known as executing the command in the background.
Finally, when you say:
$ demo=$(top) &
# ^^^^^^^^^^^ simple command, consisting ONLY of variable assignment
that simple command is executed in a subshell (call it s1), inside which $(top) is executed in yet another subshell (call it s2). The result of the command substitution is assigned to the variable demo inside the shell s1. Since no command name is given, s1 terminates right after the variable assignment, and the parent shell never receives the variables set in its child s1.
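If you need the output back in the parent, the background job therefore has to write it somewhere the parent can read. A minimal sketch of the temp-file approach (file and variable names are illustrative):
#!/bin/bash
# Capture a backgrounded command's output via a temp file, since a variable
# assigned in a background subshell never reaches the parent shell.
tmp=$(mktemp)
uptime >"$tmp" 2>&1 &     # stdout and stderr both captured
TASK_PID=$!
wait "$TASK_PID"          # or: sleep 3; kill "$TASK_PID" for a crude timeout
demo=$(<"$tmp")
rm -f "$tmp"
echo "demo: $demo"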
Communicating with a background process
If you're looking for a reliable way to communicate with the process run asynchronously, you might consider coprocesses in bash, or named pipes (FIFO) in other POSIX environments.
Coprocess setup is simpler, since coproc will set up the pipes for you, but note you might not be able to reliably read them if the process terminates before writing any output.
#!/bin/bash
coproc top -b -n3
cat <&${COPROC[0]}
FIFO setup would look something like this:
#!/bin/bash
# fifo setup/clean-up
tmp=$(mktemp -td)
mkfifo "$tmp/out"
trap 'rm -rf "$tmp"' EXIT
# bg job, terminates after 3 iterations of top
top -b >"$tmp/out" -n3 &
# read the output
cat "$tmp/out"
but note, if a FIFO is opened in blocking mode, the writer won't be able to write to it until someone opens it for reading (and starts reading).
Killing after timeout
How you'll kill the background process depends on what setup you've used, but for a simple coproc case above:
#!/bin/bash
coproc top -b
sleep 3
kill -INT "$COPROC_PID"
cat <&${COPROC[0]}
Say I start with the following statement, which echoes a string into the ether:
$ echo "foo" 1>/dev/null
I then submit the following pipeline:
$ echo "foo" | cat -e - 1>/dev/null
I then leave the process out:
$ echo "foo" | 1>/dev/null
Why does this not return an error message? The documentation on bash and piping doesn't seem to mention what may be the cause. Is an EOF sent before the first read from echo (or whatever process is running upstream of the pipe)?
A shell simple command is not required to have a command name. For a command without a command-name:
variable assignments apply to the current execution environment. The following will set two variables to argument values:
arg1=$1 arg3=$3
redirections occur in a subshell, but the subshell doesn't do anything other than initialize the redirect. The following will truncate or create the indicated file (if you have appropriate permissions):
>/file/to/empty
However, a command must have at least one word. A completely empty command is a syntax error (which is why it is occasionally necessary to use :).
Answer summarized from POSIX XCU §2.9.1.
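A quick demonstration of that last point (the target file path here is hypothetical): the colon builtin supplies the required word, turning an otherwise-empty command into a valid one:
: > /tmp/empty_me      # ':' provides the word; the file is truncated/created in the current shell
if true; then :; fi    # an intentionally empty branch also needs ':' as a placeholder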
I'd like to write a .sh script that runs several scripts in the same directory one by one, without running them concurrently (e.g. while the first one is still executing, the second one doesn't start executing).
Could you tell me the command that could be written in front of a script's name to achieve this?
I've tried source but it gives the following message for every listed script:
./outer_script.sh: source: not found
source is a non-standard extension (it comes from csh and was adopted by bash). POSIX specifies that you must use the . command. Other than the name, they are identical in bash.
However, you probably don't want to source, because that is only supposed to be used when you need the script to be able to change the state of the script calling it. It is like a #include or import statement in other languages.
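To see that state-changing behavior concretely, here is a minimal sketch (goto_tmp.sh is a hypothetical script created just for the demonstration):
echo 'cd /tmp' > goto_tmp.sh
bash goto_tmp.sh   # the child shell changes *its own* directory, then exits
pwd                # unchanged
. ./goto_tmp.sh    # sourced: runs in the current shell
pwd                # now /tmp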
You would usually want to just run the script directly as a command, i.e. do not prefix it with source nor with any other command.
As a quick example of not using source:
for script in scripts/*; do
    "$script"
done
If the above does not work, ensure that you've set the executable bit (chmod a+x) on the necessary scripts.
That is the normal behavior of a bash script, i.e. if you have three scripts:
script1.sh:
echo "starting"
./script2.sh
./script3.sh
echo "done"
script2.sh:
while [ 1 ]; do
    echo "script2"
    sleep 2
done
and script3.sh:
echo "script3"
The output is:
starting
script2
script2
script2
...
and script3.sh will never be executed, unless you modify script1.sh to be:
echo "starting"
./script2.sh &
./script3.sh &
echo "done"
in which case the output will be something like:
starting
done
script2
script3
script2
script2
...
So in this case I assume your second-level scripts contain something that starts new processes.
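If you do want the inner scripts in the background but still need "done" to print last, wait can restore the ordering. A minimal sketch, assuming inner scripts that eventually terminate (the infinite loop in script2.sh above would block forever):
echo "starting"
./script2.sh &
./script3.sh &
wait              # blocks until every background job of this shell has exited
echo "done"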
Have you included the line #!/bin/bash in your outer_script? Some systems don't use bash by default, and source is a bash command. Otherwise, just call the scripts using ./path/to/script.sh inside the outer_script.
Recently I wrote a script which sets an environment variable, take a look:
#!/bin/bash
echo "Pass a path:"
read path
echo $path
defaultPath=/home/$(whoami)/Desktop
if [ -n "$path" ]; then
    export my_var=$path
else
    echo "Path is empty! Exporting default path ..."
    export my_var=$defaultPath
fi
echo "Exported path: $my_var"
It works just great, but the problem is that my_var is available only locally, i.e. in the console window where I ran the script.
How to write a script which allow me to export global environment variable which can be seen everywhere?
Just run your shell script preceded by "." (dot space).
This causes the script to run its instructions in the original shell. Thus the variables still exist after the script finishes.
Ex:
$ cat setmyvar.sh
export myvar=exists
$ . ./setmyvar.sh
$ echo $myvar
exists
Each and every shell has its own environment. There's no universal environment that will magically appear in all console windows. An environment variable created in one shell cannot be accessed in another shell.
It's even more restrictive. If one shell spawns a subshell, that subshell has access to the parent's environment variables, but if that subshell creates an environment variable, it's not accessible in the parent shell.
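A quick demonstration of that restriction:
(export CHILD_VAR=hello)        # exported inside a subshell
echo "${CHILD_VAR:-unset}"      # the parent prints: unset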
If all of your shells need access to the same set of variables, you can create a startup file that will set them for you. This is done in BASH via the $HOME/.bash_profile file (or through $HOME/.profile if $HOME/.bash_profile doesn't exist) or through $HOME/.bashrc. Other shells have their own set of startup files. One is used for logins, and one is used for shells spawned without logins (and, as with bash, a third for non-interactive shells). See the manpage to learn exactly which startup scripts are used and in what order they're executed.
You can try using shared memory, but I believe that only works while processes are running, so even if you figured out a way to set a piece of shared memory, it would go away as soon as that command is finished. (I've rarely used shared memory except for named pipes). Otherwise, there's really no way to set an environment variable in one shell and have another shell automatically pick it up. You can try using named pipes or writing that environment variable to a file for other shells to pick it up.
Imagine the problems that could happen if someone could change the environment of my shell without my knowledge.
Actually, I found a way to achieve this (which in my case was to use a bash script to set a number of security credentials).
I just call bash from inside the script, and the spawned shell now has the exported values:
export API_USERNAME=abc
export API_PASSWORD=bbbb
bash
Now, calling the file using ~/.app-x-setup.sh will give me an interactive shell with those environment variables set up.
The following is quoted from the second paragraph of David W.'s answer: "If one shell spawns a subshell, that subshell has access to the parent's environment variables, but if that subshell creates an environment variable, it's not accessible in the parent shell."
If you need the parent shell to access your new environment variables, just issue the following command in the parent shell:
source <your_subshell_script>
or, using the shortcut:
. <your_subshell_script>
You have to add the variable to your .profile, located at /home/$USER/.profile.
You can do that with this command:
echo 'TEST="hi"' >> $HOME/.profile
Or by editing the file with emacs, for example.
If you want to set this variable for all users, you have to edit /etc/profile (as root).
There is no global environment, really, in UNIX.
Each process has an environment, originally inherited from the parent, but it is local to the process after the initial creation.
You can only modify your own, unless you go digging around in the process using a debugger.
Write it to a temporary file, let's say ~/.myglobalvar, and read it from anywhere:
echo "$myglobal" > ~/.myglobalvar
Environment variables are always "local" to a process's execution; the export command allows you to set environment variables for subprocesses. You can look at .bashrc to set environment variables at the start of a bash shell. What you are trying to do is not possible, as a process cannot modify (or access?) the environment variables of another process.
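A short demonstration of what export does and does not do:
MY_VAR=plain                         # not exported: invisible to child processes
bash -c 'echo "${MY_VAR:-unset}"'    # prints: unset
export MY_VAR=exported               # exported: copied into children's environments
bash -c 'echo "$MY_VAR"'             # prints: exported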
You can update the ~/.bashrc or ~/.bash_profile file which is used to initialize the environment.
Take a look at the loading behavior of your shell (explained in the manpage, usually referring to .XXXshrc or .profile). Some configuration files are loaded at login time of an interactive shell, some are loaded each time you run a shell. Placing your variable in the latter might give you the behavior you want, i.e. always having the variable set in shells of that kind (for example bash).
If you need to dynamically set and reference environment variables in shell scripts, there is a workaround. Judge for yourself whether it is worth doing, but here it is.
The strategy involves having a 'set' script which dynamically writes a 'load' script, which has code to set and export an environment variable. The 'load' script is then executed periodically by other scripts which need to reference the variable. BTW, the same strategy could be done by writing and reading a file instead of a variable.
Here's a quick example...
Set_Load_PROCESSING_SIGNAL.sh
#!/bin/bash
PROCESSING_SIGNAL_SCRIPT=./Load_PROCESSING_SIGNAL.sh
echo "#!/bin/bash" > $PROCESSING_SIGNAL_SCRIPT
echo "export PROCESSING_SIGNAL=$1" >> $PROCESSING_SIGNAL_SCRIPT
chmod ug+rwx $PROCESSING_SIGNAL_SCRIPT
Load_PROCESSING_SIGNAL.sh (this gets dynamically created when the above is run)
#!/bin/bash
export PROCESSING_SIGNAL=1
You can test this with
Test_PROCESSING_SIGNAL.sh
#!/bin/bash
PROCESSING_SIGNAL_SCRIPT=./Load_PROCESSING_SIGNAL.sh
N=1
LIM=100
while [ $N -le $LIM ]
do
    # DO WHATEVER LOOP PROCESSING IS NEEDED
    echo "N = $N"
    sleep 5
    N=$(( $N + 1 ))
    # CHECK PROCESSING_SIGNAL
    source $PROCESSING_SIGNAL_SCRIPT
    if [[ $PROCESSING_SIGNAL -eq 0 ]]; then
        # Write log info indicating that the signal to stop processing was detected
        # Write out all relevant info
        # Send an alert email of this too
        # Then exit
        echo "Detected PROCESSING_SIGNAL for all stop. Exiting..."
        exit 1
    fi
done
A lazy script, kept in ~/.bin/SOURCED/, to save and load data as flat files for the system:
[ ! -d ~/.megadata ] && mkdir ~/.megadata

function save_data {
    # usage: save_data <id> <data> [overwrite]
    [ -z "$1" -o -z "$2" ] && echo 'save_data [:id:] [:data:]' && return
    local overwrite=${3-false}
    [ "$overwrite" = 'true' ] && echo "$2" > ~/.megadata/"$1" && return
    [ ! -f ~/.megadata/"$1" ] && echo "$2" > ~/.megadata/"$1" || echo 'ID TAKEN set third param to true to overwrite'
}

save_data computer engine
cat ~/.megadata/computer
save_data computer engine            # ID already taken: refuses to overwrite
save_data computer megaengine true   # third param true: overwrite allowed

function get_data {
    # usage: get_data <id>
    [ -z "$1" -o -f "$1" ] && echo 'get_data [:id:]' && return
    [ -f ~/.megadata/"$1" ] && cat ~/.megadata/"$1" || echo 'ID NOT FOUND'
}

get_data computer
get_data computer
Maybe a little off topic, but this is for when you really need to set variables temporarily to execute some script, and you ended up here looking for answers:
If you need to run a script with certain environment variables that you don't need to keep after execution, you could do something like this:
#!/usr/bin/env sh
export XDEBUG_SESSION=$(hostname);echo "running with xdebug: $XDEBUG_SESSION";"$@"
In my example I just use XDEBUG_SESSION with a hostname, but you can use multiple variables; keep them separated with a semicolon. Execute it as follows (assuming you called the script debug.sh and placed it in the same directory as your php script):
$ ./debug.sh php yourscript.php