I'm trying to run a script that is required to have an exit code of 0. Unfortunately I cannot use an init.d or other startup script to control this, so I must make this work.
Basically, if I understand AWS's docs correctly (Elastic Beanstalk), I need to be able to run the following two commands, exit with a 0, and produce no other output to stdout.
As the root user I need to cd to a particular dir and run these two commands:
pkill -f que
bundle exec que
In my actual script I have:
#!/usr/bin/env bash
su -s /bin/bash -c "cd /some/dir && nohup pkill -f que &>/dev/null &"
sleep 10
su -s /bin/bash -c "cd /some/dir && nohup bundle exec que &"
Which still causes this error to be raised:
returned non-zero exit status 1 (Executor::NonZeroExitStatus)
Any tips for how to silently run those commands correctly?
I'm also looking at these for ideas:
https://blog.eq8.eu/article/aws-elasticbeanstalk-hooks.html
http://www.dannemanne.com/posts/post-deployment_script_on_elastic_beanstalk_restart_delayed_job
But it's still not clear to me how this is supposed to exit successfully.
Perhaps I'm missing something, but wouldn't this be easily solved by using two shell scripts? One (foo.sh) containing the cd, pkill, and bundle commands, and a wrapper that calls it, something like:
#!/usr/bin/env bash
su -c ./foo.sh > /dev/null 2>&1 < /dev/null
exit 0
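For completeness, a minimal sketch of what foo.sh might contain, based on the path and commands from the question (the sleep mirrors the asker's original script):

#!/usr/bin/env bash
cd /some/dir || exit 1
pkill -f que                               # exits non-zero if nothing matched; the wrapper ignores it
sleep 10                                   # give the old worker time to die, as in the original script
nohup bundle exec que >/dev/null 2>&1 &    # keep the new worker alive after this script exits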
Related
I want to run this command inside a Docker container (ubuntu:18.04 image):
(cd inse/; sh start.sh > log.txt 2>&1 ;) &
but when I run it, nothing is logged to log.txt. When I run it this way:
(cd inse/; sh start.sh > log.txt 2>&1 ;)
It locks the foreground (as it should) and when I kill it I see that log.txt is filled with the log output, which means it works correctly.
Why is this behaviour happening?
The contents of start.sh is:
#!/usr/bin/env sh
. venv/bin/activate;
python3 main.py;
UPDATE:
Actually, this command is not the entry point of the container; I run it from another shell, inside a long-running container (a testing container).
Using nohup, no success either:
(cd inse/; nohup sh start.sh | tee log.txt;) &
I think this problem is related to using () (the subshell concept) in sh. It seems the output does not go anywhere when the subshell is run in the background.
UPDATE 2:
Even this does not work:
sh -c "cd inse/; sh start.sh > log.txt 2>&1 &"
UPDATE 3:
Not even this:
sh -c "cd inse/; sh start.sh > log.txt 2>&1;" &
I found what was causing the problem: buffered Python output.
I should have used Python's unbuffered output:
python -u blahblah
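If editing the command isn't convenient, Python's buffering can also be disabled through the PYTHONUNBUFFERED environment variable, which has the same effect as -u:

# pass -u directly...
python3 -u main.py
# ...or set the environment variable (any non-empty value disables buffering)
PYTHONUNBUFFERED=1 python3 main.py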
Try the commands below, and please check that you have full access to the folder where log.txt is created. Use a CMD or RUN step in the Dockerfile to run start.sh:
CMD /inse/start.sh > log.txt 2>&1 ;
OR
RUN /inse/start.sh > log.txt 2>&1 ;
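If start.sh is meant to be the container's entry point, a hypothetical minimal Dockerfile combining this with the unbuffered-output fix above might look like this (paths taken from the question):

FROM ubuntu:18.04
WORKDIR /inse
# Disable Python's output buffering so log.txt fills in real time.
ENV PYTHONUNBUFFERED=1
# CMD runs at container start; a RUN step would instead execute at image build time.
CMD sh start.sh > log.txt 2>&1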
I am executing one shell script from another shell script. The called script does not terminate after execution, but when I run it separately, it works fine and terminates normally.
Script 1
#! /bin/bash
WebApp="R"
#----------Check for Web Application Status
localWebAppURL="http://localhost:8082/"
if curl --max-time 5 --output /dev/null --silent --head --fail "$localWebAppURL"; then
    WebApp="G"
else
    exec ./DownTimeCalc.sh &
fi
echo "WebApp Status|\"WebApp\":\"$WebApp\""
In above script I am calling another script called DownTimeCalc.sh.
DownTimeCalc.sh
#! /bin/bash
WebApp="R"
max=15
for (( i=1; i <= $max; ++i ))
do
    if curl --max-time 5 --output /dev/null --silent --head --fail "http://localhost:8082/"; then
        WebApp="G"
        echo "Status|\"WebApp\":\"$WebApp\""
        break
    else
        WebApp="R"
        sleep 10
    fi
    echo "Status|\"WebApp\":\"$WebApp\""
done
echo "finished"
exit
exec ./DownTimeCalc.sh &
You don't need exec. If you want to run the script and wait for it to complete, just write:
./DownTimeCalc.sh
Or if you want to run it in the background and have the first script continue, write:
./DownTimeCalc.sh &
When you use &, the process is launched in the background and keeps running while further commands in the foreground script execute or while you interact with the shell. It's doing what you told it to do. If you press Enter you will see any queued-up stderr output, and typing fg will bring the process to the foreground if it is still running.
You probably don't want to use & in this case, though.
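For reference, a minimal sketch of Script 1 with that fix applied (no exec and no &, so the first script simply waits for DownTimeCalc.sh to finish):

#!/bin/bash
WebApp="R"
#----------Check for Web Application Status
localWebAppURL="http://localhost:8082/"
if curl --max-time 5 --output /dev/null --silent --head --fail "$localWebAppURL"; then
    WebApp="G"
else
    ./DownTimeCalc.sh    # runs in the foreground and terminates normally
fi
echo "WebApp Status|\"WebApp\":\"$WebApp\""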
I have WSL bash running in a cmd window. I don't use it for anything; it just sits there to keep the WSL system alive.
When I start X applications:
bash -c "DISPLAY=:0 xmessage hello &"
When I do, an extra cmd window pops up alongside the X application. I can close the command window without any problems, but it's rather annoying.
How can I run commands without getting this cmd window every time?
Here's a simpler solution, which, however, requires a WSH-based helper script, runHidden.vbs (see bottom section):
wscript .\runHidden.vbs bash -c "DISPLAY=:0 xmessage 'hello, world'"
To apply @davv's own launch-in-background technique to avoid creating a new bash instance every time:
One-time action (e.g., at boot time): launch a hidden, stay-open bash window. This spawns 2 bash processes: the Windows bash.exe process that owns the console window, and the WSL bash process (owned by the WSL init singleton), which is then available for servicing background commands.
wscript .\runHidden.vbs bash # hidden helper instance for servicing background commands
For every X Window-launching command: Terminate each command with & to have it be run by the hidden WSL bash instance asynchronously, without keeping the invoking bash instance alive:
wscript .\runHidden.vbs bash -c "DISPLAY=:0 xmessage 'hello, world' &"
runHidden.vbs source code:
' Simple command-line help.
select case WScript.Arguments(0)
case "-?", "/?", "-h", "--help"
WScript.echo "Usage: runHidden executable [...]" & vbNewLine & vbNewLine & "Runs the specified command hidden (without a visible window)."
WScript.Quit(0)
end select
' Separate the arguments into the executable name
' and a single string containing all arguments.
exe = WScript.Arguments(0)
sep = ""
for i = 1 to WScript.Arguments.Count -1
' Enclose arguments in "..." to preserve their original partitioning, if necessary.
if Instr(WScript.Arguments(i), " ") > 0 then
args = args & sep & """" & WScript.Arguments(i) & """"
else
args = args & sep & WScript.Arguments(i)
end if
sep = " "
next
' Execute the command with its window *hidden* (0)
WScript.CreateObject("Shell.Application").ShellExecute exe, args, "", "open", 0
Even when launched from a GUI app (such as via the Run dialog invoked with Win+R), this will not show a console window.
If your system is configured to execute .vbs scripts with wscript.exe by default (wscript //h:wscript /s, which, I think, is the default configuration), you can invoke runHidden.vbs directly, and if you put it in your %PATH%, by filename (root) only: runHidden ....
Note that use of the script is not limited to console applications: even GUI applications can be run hidden with it.
There's another simple solution, though it requires an external executable. It has no dependencies and was recommended by aseering on GitHub.
You can launch bash via run.exe: run.exe bash.exe -c "<whatever Linux command>". (run.exe is available here: http://www.straightrunning.com/projectrun/ ; make sure you download the 64-bit version, as the 32-bit version will not be able to find or run bash.)
With run on the search PATH, you can just call
run bash -c "DISPLAY=:0 xmessage hello"
So I just made this workaround for now. I really hope that there's a better way than this, but here it goes:
In the command prompt that lives purely to keep WSL alive, I have this script running:
wsl_run_server
#!/bin/bash
set -e
nc -kl 127.0.0.1 15150 | sh
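You can sanity-check the listener by hand: anything piped to port 15150 is executed by sh, so (purely as an illustration) this should pop up an X window without going through the helper script:

echo 'DISPLAY=:0 xmessage hello &' | nc localhost 15150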
And then I have this command to execute commands in background:
wsl_run_command
#!/bin/bash
if ! pidof -x bin/wsl_run_server; then
    echo "wsl_run_server isn't running!"
    exit 1
fi
echo \($@\) \& | nc localhost 15150
From Windows I then call:
bash -c "DISPLAY=:0 ~/bin/wsl_run_command xmessage hello"
With WSLg recently added to the mix, there is no longer any need for that command window to pop up. You can just call bash using WSLg, like so (I currently use Ubuntu in WSL):
wslg ~ -d Ubuntu bash
This will create a bash session that just sits there without being seen. Alternatively, you can do what I do and run a few services that stay running. I created a script that checks for the services and, if it doesn't find them running, starts them. Create the file in /usr/bin:
sudo touch /usr/bin/start-wsl-services
sudo nano /usr/bin/start-wsl-services
Paste the following into the file:
#!/bin/bash
# Check for and run System-wide DBus service.
SERVICE="dbus-daemon"
if pgrep -x "$SERVICE" >/dev/null
then
    pgrep -a "$SERVICE"
else
    sudo /etc/init.d/dbus start
    pgrep -a "$SERVICE"
fi
# Check for and run CUPS Printing Service.
SERVICE="cupsd"
if pgrep -x "$SERVICE" >/dev/null
then
    pgrep -a "$SERVICE"
else
    sudo /etc/init.d/cups start
    pgrep -a "$SERVICE"
fi
# Check for and start Freshclam CLAMAV Update service.
SERVICE="freshclam"
if pgrep -x "$SERVICE" >/dev/null
then
    pgrep -a "$SERVICE"
else
    sudo /etc/init.d/clamav-freshclam start
    pgrep -a "$SERVICE"
fi
# Check for and start SANED Scanner service.
SERVICE="saned"
if pgrep -x "$SERVICE" >/dev/null
then
    pgrep -a "$SERVICE"
else
    sudo /etc/init.d/saned start
    pgrep -a "$SERVICE"
fi
# Check for and start screen-cleanup service.
SERVICE="screen-cleanup"
if pgrep -x "$SERVICE" >/dev/null
then
    pgrep -a "$SERVICE"
else
    sudo /etc/init.d/screen-cleanup start
    pgrep -a "$SERVICE"
fi
# Check for and start Preload service.
SERVICE="preload"
if pgrep -x "$SERVICE" >/dev/null
then
    pgrep -a "$SERVICE"
else
    sudo /etc/init.d/preload start
    pgrep -a "$SERVICE"
fi
# Prestart LibreOffice twice for faster loading.
#/usr/bin/libreoffice --terminate_after_init
#sleep 5
#/usr/bin/libreoffice --terminate_after_init
# Check for error, make sure all functions called and run, and pass the result on to calling process.
if [[ $? -ne 0 ]] ; then
    exit 1
else
    exit 0
fi
Save and exit the file, and then make it executable:
sudo chmod +x /usr/bin/start-wsl-services
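If you'd rather avoid the copy-paste repetition, the same checks can be written as a loop; a minimal sketch, assuming bash 4+ (for the associative array) and the same process-name-to-init-script pairs as above:

#!/bin/bash
# Map each process name to its /etc/init.d script.
declare -A services=(
    [dbus-daemon]=dbus
    [cupsd]=cups
    [freshclam]=clamav-freshclam
    [saned]=saned
    [screen-cleanup]=screen-cleanup
    [preload]=preload
)
for proc in "${!services[@]}"; do
    if ! pgrep -x "$proc" >/dev/null; then
        sudo /etc/init.d/"${services[$proc]}" start
    fi
    pgrep -a "$proc"    # show the (now) running process
done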
I then call this using a shortcut that runs a startup script at startup. Or you can just run it manually. The command I use in the startup script is:
C:\Windows\System32\wslg.exe -d Ubuntu -- /usr/bin/start-wsl-services
The startup command script I use (named StartWSL.cmd) is as follows:
@echo off
echo Starting WSL Linux...
:RETRY
C:\Windows\System32\wslg.exe -d Ubuntu -- /usr/bin/start-wsl-services
REM - C:\Windows\System32\bash.exe -c '/usr/bin/start-wsl-services'
IF %ERRORLEVEL% NEQ 0 (GOTO RETRY)
REM - Allow time to see all results.
timeout /t 5 /nobreak >NUL
REM - Uncomment below line for troubleshooting.
REM - pause
exit 0
And that's how I now keep WSL running in the background on Windows 11, and similar to how I did it on Windows 10.
Run a command in the background with screen:
screen -dmS [name] [command]
Example:
screen -dmS gui bash -c "DISPLAY=:0 xmessage hello"
Create a shortcut on the Windows desktop (run in WSL):
wslusc screen -dmS gui bash -c "DISPLAY=:0 xmessage hello"
I have a shell script that needs to run as a particular user, so I call that script as below:
su - testuser -c "/root/check_package.sh | tee -a /var/log/check_package.log"
After this, when I check the exit code of the last execution, it always returns 0, even if the script fails.
I also tried the following, which didn't help:
su - testuser -c "/root/check_package.sh | tee -a /var/log/check_package.log && echo $? || echo $?"
Is there a way to get the exit code of a command run through su?
The problem here is not su, but tee: By default, the shell exits with the exit status of the last pipeline component; in your code, that component is not check_package.sh, but instead is tee.
If your /bin/sh is provided by bash (as opposed to ash, dash, or another POSIX-baseline shell), use set -o pipefail to cause the entire pipeline to fail if any component of it does:
su - testuser -c "set -o pipefail; /root/check_package.sh | tee -a /var/log/check_package.log"
Alternately, you can do the tee out-of-band with redirection to a process substitution (though this requires your current user to have permission to write to check_package.log):
su - testuser -c "/root/check_package.sh" > >(tee -a /var/log/check_package.log
Both su and sudo exit with the exit status of the command they execute (if authentication succeeded):
$ sudo false; echo $?
1
$ su -c false; echo $?
1
Your problem is that the command su runs is a pipeline. The exit status of your pipeline is that of the tee command (which succeeds), but what you really want is that of the first command in the pipeline.
If your shell is bash, you have a couple of options:
set -o pipefail before your pipeline, which will make it return the rightmost failure value of all the commands if any of them fail
Examine the specific member of the PIPESTATUS array variable - this can give you the exit status of the first command whether or not tee succeeds.
Examples:
$ sudo bash -c "false | tee -a /dev/null"; echo $?
0
$ sudo bash -c "set -o pipefail; false | tee -a /dev/null"; echo $?
1
$ sudo bash -c 'false | tee -a /dev/null; exit ${PIPESTATUS[0]}'; echo $?
1
You will get similar results using su -c, if your system shell (in /bin/sh) is Bash. If not, then you'd need to explicitly invoke bash, at which point sudo is clearly simpler.
I was facing a similar issue today. In case the topic is still open, here is my solution; otherwise just ignore it...
I wrote a bash script (let's say my_script.sh) which looks more or less like this:
### FUNCTIONS ###
<all functions listed in the main script which do what I want...>
### MAIN SCRIPT ### calls the functions defined in the section above
main_script() {
    log_message "START" 0
    check_env
    check_data
    create_package
    tar_package
    zip_package
    log_message "END" 0
}
main_script | tee -a ${var_log}   # executes the script and writes info into the log file
var_sta=${PIPESTATUS[0]}          # captures the exit status of the first pipeline component (main_script)
exit ${var_sta}                   # exits with that status
It works whether you call the script directly or via sudo.
I have a script that uses ssh to login to a remote machine, cd to a particular directory, and then start a daemon. The original script looks like this:
ssh server "cd /tmp/path ; nohup java server 0</dev/null 1>server_stdout 2>server_stderr &"
This script appears to work fine. However, it is not robust to the case when the user enters the wrong path so the cd fails. Because of the ;, this command will try to run the nohup command even if the cd fails.
The obvious fix doesn't work:
ssh server "cd /tmp/path && nohup java server 0</dev/null 1>server_stdout 2>server_stderr &"
That is, the ssh command does not return until the server is stopped. Putting nohup in front of the cd instead of in front of the java didn't work either.
Can anyone help me fix this? Can you explain why this solution doesn't work? Thanks!
Edit: cbuckley suggests using sh -c, from which I derived:
ssh server "nohup sh -c 'cd /tmp/path && java server 0</dev/null 1>master_stdout 2>master_stderr' 2>/dev/null 1>/dev/null &"
However, now the exit code is always 0 when the cd fails; whereas if I do ssh server cd /failed/path then I get a real exit code. Suggestions?
See Bash's Operator Precedence.
The & is being attached to the whole statement because && binds more tightly than & (i.e., & has the lower precedence). You don't need ssh to verify this. Just run this in your shell:
$ sleep 100 && echo yay &
[1] 19934
If the & were only attached to the echo yay, then your shell would sleep for 100 seconds and then report the background job. However, the entire sleep 100 && echo yay is backgrounded and you're given the job notification immediately. Running jobs will show it hanging out:
$ sleep 100 && echo yay &
[1] 20124
$ jobs
[1]+ Running sleep 100 && echo yay &
You can use parentheses to create a subshell around echo yay &, giving you what you'd expect:
sleep 100 && ( echo yay & )
This would be similar to using bash -c to run echo yay &:
sleep 100 && bash -c "echo yay &"
Tossing these into ssh, we get:
# using parentheses...
$ ssh localhost "cd / && (nohup sleep 100 >/dev/null </dev/null &)"
$ ps -ef | grep sleep
me 20136 1 0 16:48 ? 00:00:00 sleep 100
# and using `bash -c`
$ ssh localhost "cd / && bash -c 'nohup sleep 100 >/dev/null </dev/null &'"
$ ps -ef | grep sleep
me 20145 1 0 16:48 ? 00:00:00 sleep 100
Applying this to your command, we get:
ssh server "cd /tmp/path && (nohup java server 0</dev/null 1>server_stdout 2>server_stderr &)"
or:
ssh server "cd /tmp/path && bash -c 'nohup java server 0</dev/null 1>server_stdout 2>server_stderr &'"
Also, with regard to your comment on the post,
"Right, sh -c always returns 0. E.g., sh -c exit 1 has error code 0."
this is incorrect. Directly from the manpage:
Bash's exit status is the exit status of the last command executed in
the script. If no commands are executed, the exit status is 0.
Indeed:
$ bash -c "true ; exit 1"
$ echo $?
1
$ bash -c "false ; exit 22"
$ echo $?
22
ssh server "test -d /tmp/path" && ssh server "nohup ... &"
Answer roundup:
Bad: Using sh -c to wrap the entire nohup command doesn't work for my purposes because it doesn't return error codes. (@cbuckley)
Okay: ssh <server> <cmd1> && ssh <server> <cmd2> works but is much slower. (@joachim-nilsson)
Good: Create a shell script on <server> that runs the commands in succession and returns the correct error code. (See the sketch below.)
The last is what I ended up using. I'd still be interested in learning why the original use-case doesn't work, if someone who understands shell internals can explain it to me!
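For reference, a minimal sketch of that server-side script, using the paths from the question (the script name start_server.sh is hypothetical):

#!/bin/bash
# start_server.sh (hypothetical name) -- lives on the server
cd /tmp/path || exit 1    # propagate a real error code if the path is wrong
nohup java server 0</dev/null 1>server_stdout 2>server_stderr &

Invoked as ssh server ./start_server.sh, the locally observed exit status is 1 when the path is wrong, and 0 once the daemon has been launched in the background.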