Envoyer Deployment with custom artisan command fails - laravel

I have a problem deploying my project with Envoyer when the deployment runs an artisan command I created.
The command gets all my users, calls another artisan command ($this->call('command')) and performs its actions by iterating over all the users.
The problem lies here:
foreach ($usernames as $username) {
    shell_exec('php ' . base_path('artisan') . ' command ' . $username . ' > /dev/null 2>/dev/null &');
}
This command starts a script in the background.
When run manually it executes without any problems and never hits a timeout (it takes about 1 second to execute), but in Envoyer the deployment step won't stop running and eventually fails with a timeout, even though the command itself executes flawlessly.
Additional information:
The reason why I'm running the script in the background:
the script I'm starting opens a socket which it listens on 24/7 until
the user cancels it manually.

I've just created a smaller example to make sure it's all working fine:
File forever.php will keep an infinite loop printing something every 5 seconds:
<?php
while (true) {
    echo "I am still in the loop $argv[1]\n";
    sleep(5);
}
File script.php will call multiple instances of forever.php and detach from the parent process (same as what you did):
<?php
for ($i = 0; $i < 5; $i++) {
    shell_exec("php forever.php $i > /dev/null 2>/dev/null &");
}
When executing php script.php, there are indeed 5 instances of forever.php left running. So your code seems fine (at least the part you've shown).
The only things I can think of in your case are:
Is 1 second too long for a script to run during an Envoyer deployment?
Your loop is a foreach over all your users. Could it be you're generating too many processes in that loop? How many users do you have?
Could you print your command before executing it, and try it directly in the terminal, to see whether it's all working fine?
Hope this helps; if you need more help, please provide some more information.
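One more thing worth trying, offered only as a hedged sketch (I haven't verified this against Envoyer's internals): make sure each background worker is fully detached from the deployment hook's file descriptors, since a hook whose stdin/stdout is still held open by a child process can look like it never finishes. The command string passed to shell_exec could redirect stdin as well and prefix nohup; the artisan path and the command name below are placeholders:

nohup php /path/to/artisan command "$username" > /dev/null 2> /dev/null < /dev/null &

If that alone doesn't help, starting the workers in their own session with setsid would be the next thing I'd try.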

Related

Bash file locking including flock for subprocesses

I am trying to work through securing my scripts from parallel execution by incorporating flock. I have read a number of threads here and came across a reference to this: http://www.kfirlavi.com/blog/2012/11/06/elegant-locking-of-bash-program/ which incorporates many of the examples presented in the other threads.
My scripts will eventually run on Ubuntu (>14), OS X 10.7 and 10.11.4. I am mainly testing on OS X 10.11.4 and have installed flock via homebrew.
When I run the script below, locks are being created, but I think I am forking the subscripts, and it is those subscripts that I am trying to ensure never run more than one instance each.
#!/bin/bash
#----------------------------------------------------------------
set -vx
set -euo pipefail
set -o errexit
IFS=$'\n\t'
readonly PROGNAME=$(basename "$0")
readonly LOCKFILE_DIR=/tmp
readonly LOCK_FD=200
subprocess1="/bash$/subprocess1.sh"
subprocess2="/bash$/subprocess2.sh"
lock() {
    local prefix=$1
    local fd=${2:-$LOCK_FD}
    local lock_file=$LOCKFILE_DIR/$prefix.lock
    # create the lock file
    eval "exec $fd>$lock_file"
    # acquire the lock
    flock -n $fd \
        && return 0 \
        || return 1
}
eexit() {
    local error_str="$@"
    echo "$error_str"
    exit 1
}
main() {
    lock $PROGNAME \
        || eexit "Only one instance of $PROGNAME can run at one time."
    ## My child scripts
    sh "$subprocess1"  # wait for it to finish, then run
    sh "$subprocess2"
}
main
$subprocess1 is a script that runs ncftpget and logs into a remote server to grab some files. Once finished, the connection closes. I want to run subprocess1 every 15 minutes via cron. I have done so with success, but sometimes there are many files to grab and the job takes longer than 15 minutes. It is rare, but it does happen. In such a case, I want to ensure a second instance of $subprocess1 can't be started. For clarity, a small example of such a subscript is:
#!/bin/bash
remoteftp="someftp.ftp"
ncftplog="somelog.log"
localdir="some/local/dir"
ncftpget -R -T -f "$remoteftp" -d "$ncftplog" "$localdir" "*.files"
EXIT_V="$?"
case $EXIT_V in
    0) O="Success!";;
    1) O="Could not connect to remote host.";;
    2) O="Could not connect to remote host - timed out.";;
    3) O="Transfer failed.";;
    4) O="Transfer failed - timed out.";;
    5) O="Directory change failed.";;
    6) O="Directory change failed - timed out.";;
    7) O="Malformed URL.";;
    8) O="Usage error.";;
    9) O="Error in login configuration file.";;
    10) O="Library initialization failed.";;
    11) O="Session initialization failed.";;
esac
if [ "$EXIT_V" = 0 ]; then
    echo "$O"
else
    echo "There has been an error: $O"
    echo "Exiting now..."
    exit 1
fi
echo "Goodbye"
and an example of subprocess2 is:
#!/bin/bash
...preamble script setup items etc and then:
java -jar /some/javaprog.jar
When I execute the parent script with "sh lock.sh", it progresses through the script without error and exits. The first issue I have is that if I load up the script again, I get an error indicating that only one instance of lock.sh can run. What should I have added to the script so that it indicates the processes have not completed yet (rather than merely exiting and giving back the prompt)?
However, if subprocess1 was running on its own, lock.sh would still load a second instance of subprocess1 because it was not locked. How would one go about locking child scripts, and ideally ensuring that forked processes are taken care of as well? If someone had run subprocess1 at the terminal, or there was a runaway instance, and cron then loaded lock.sh, I would want lock.sh to fail when it tries to start its own instances of subprocess1 and subprocess2, not merely exit the way it does when cron tries to load two instances of lock.sh.
My main concern is loading multiple instances of ncftpget, which is called by subprocess1, and further of a third script I hope to incorporate, "subprocess2", which launches a Java program that deals with the downloaded files. Neither ncftpget nor the Java program can run as parallel processes without breaking many things, but I'm at a loss on how to control them adequately.
I thought I could use something similar to this in the main() function of lock.sh:
#This is where I try to lock the subscript
pidfile="${subprocess1}"
# lock it
exec 200>$pidfile
flock -n 200 || exit 1
pid=$$
echo $pid 1>&200
but am not sure how to incorporate it.
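For what it's worth, here is one rough way to incorporate it, reusing the lock() helper from lock.sh above; the lock names and the fd numbers 201/202 are arbitrary choices of mine, not anything required by flock:

run_locked() {
    # Take a per-subscript lock before running it; refuse to start a second copy.
    local name=$1 fd=$2 script=$3
    if lock "$name" "$fd"; then
        sh "$script"
    else
        echo "$name is already running, skipping." >&2
        return 1
    fi
}

main() {
    lock "$PROGNAME" \
        || eexit "Only one instance of $PROGNAME can run at one time."
    run_locked subprocess1 201 "$subprocess1"
    run_locked subprocess2 202 "$subprocess2"
}

Note that this only guards invocations that go through lock.sh; if a subscript can also be started directly from the terminal, the same lock call has to live at the top of each subscript as well so that a cron-driven run backs off.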

Check processes run by cronjob to avoid multiple execution

How do I avoid a cron job executing the same command multiple times? I tried to look around and check for and kill running processes, but it doesn't work with the code below. With the code below it keeps entering the else branch even when it is supposed to report "running". Any idea which part I did wrongly?
#!/bin/sh
devPath=`ps aux | grep "[i]mport_shell_script"` | xargs
if [ ! -z "$devPath" -a "$devPath" != " " ]; then
    echo "running"
    exit
else
    while true
    do
        sudo /usr/bin/php /var/www/html/xxx/import_from_datafile.php > /dev/null 2>&1
        sleep 5
    done
fi
exit
cronjob:
*/2 * * * * root /bin/sh /var/www/html/xxx/import_shell_script.sh > /dev/null 2>&1
I don't see the point of adding a cron job which then starts a loop that runs the job. Either use cron to run the job every minute, or use a daemon script to make sure your service is started and kept running.
To check whether your script is already running, you can use a lock directory (unless your daemon framework already does that for you):
LOCK=/tmp/script.lock # You may want a better name here
mkdir $LOCK || exit 1 # Exit with error if script is already running
trap "rmdir $LOCK" EXIT # Remove the lock when the script terminates
...normal code...
If your OS supports it, then /var/lock/script might be a better path.
Your next question is probably how to write a daemon. To answer that, I need to know what kind of Linux you're using and whether you have things like systemd, daemonize, etc.
Check for the presence of a file at the beginning of your script (for example /tmp/runonce-import_shell_script). If it exists, that means the same script is already running (or the previous one halted with an error).
You can also write a timestamp into that file so you can check how long the script has been running (and maybe decide to run it again after 24h even if the file is present).
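A minimal sketch of that idea (the file name and the 24-hour threshold are only examples, adapt as needed):

#!/bin/sh
LOCKFILE=/tmp/runonce-import_shell_script
MAX_AGE=$((24 * 60 * 60))          # treat a lock older than 24h as stale

if [ -f "$LOCKFILE" ]; then
    started=$(cat "$LOCKFILE")
    now=$(date +%s)
    if [ $((now - started)) -lt "$MAX_AGE" ]; then
        echo "Already running (lock created at epoch $started), exiting." >&2
        exit 1
    fi
    # lock is older than 24h: assume the previous run crashed and carry on
fi
date +%s > "$LOCKFILE"
trap 'rm -f "$LOCKFILE"' EXIT

# ... the actual import work goes here ...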

capture exit code from a script flow

I need help with some scripts I'm writing.
Scenario:
Script A is executed by a scheduling process. This script takes the arguments passed to it, parses them in some way and runs script B, feeding it those arguments;
Script B does sudo -u user ssh user@REMOTEMACHINE, runs some commands (on the remote machine) and finally runs script C (also on the remote machine). I am passing those commands using a HERE DOCUMENT. I'm also passing the previous arguments on to this script.
This "flow" runs correctly and the job completes successfully.
My problems are:
Since this "flow" is ran by a scheduling process, I need to tell it if the job completed successfully or not. I'm doing this via exit codes, so what I want is to have a chain of exit codes, returning back from the last script to the first, in case of errors. I'm not able to perform this, because exit codes works correctly for the single scripts (I tried executing them singularly and look for the exit codes), but they are not sended back to the parent script. In my opinion, the problem is that ssh is getting the exit code from the child script, which in fact ended successfully, because there was no error executing it: it's the command inside of it that gone wrong.
While the process works correctly, I still get this line:
ssh: Could not resolve hostname : Name or service not known
But actually the script completes successfully.
I hope you understand what I wrote; I can post my scripts here if needed.
Thanks
O.
EDIT:
These are the scripts. There could be some problems with variable names because I renamed them quickly to upload the files.
Since I can't upload 3 files because of my low reputation, I merged them into a single file:
SCRIPT FILE
I managed to solve the problem.
I followed Olivier's advice and used the escape character so that the variable is expanded by the remote machine.
I also implemented different exit codes based on where the error occurred.
Finally, I modified the first script as follows, right after it launches the second script via sudo -u:
EXITCODEOFTHESECONDSCRIPT=$?
if [ $EXITCODEOFTHESECONDSCRIPT = 0 ]
then
    echo ""
    echo "Export job took $SECONDS seconds."
    echo ""
    exit 0
else
    exit $EXITCODEOFTHESECONDSCRIPT
fi
This way I am able to exit the main script while MAINTAINING the exit code provided by the second script.
In fact, I found that the process worked well even in case of errors; the problem was that running more commands after the second script failed (the echo command was enough) produced other exit codes that overwrote the one I wanted.
Thanks to all !
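For anyone landing here with the same issue: ssh exits with the status of the last command executed on the remote side, so the chain works as long as nothing runs between the command whose status you care about and the moment you read $?. A stripped-down sketch of the pattern (host, user and paths are placeholders, not the original scripts):

# run the remote commands; ssh returns the remote exit status
sudo -u user ssh user@remotemachine 'bash -s' <<'EOF'
set -e                      # stop the remote block at the first failing command
# ... remote setup commands ...
/path/to/scriptC.sh         # placeholder for the real remote script
EOF
rc=$?                       # capture it immediately, before any echo or cleanup
exit "$rc"                  # script A can then read the same value via $?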

shell /bash script - how do I execute multiple scripts at once using while

#!/bin/bash
for ((i = 1; i <= 10; i++))
do
    php /var/www/get.php
done
I have the above shell script that executes get.php 10 times. However, the scripts are executed one by one, so I would like to know if it's possible to execute all of them at once (obviously without typing the php command 10 times).
If you want to run them all concurrently, you can change:
php /var/www/get.php
into:
php /var/www/get.php &
This runs the process in the background rather than the foreground. It's your responsibility to ensure that your script is functional when running in the background of course. You may have to watch out for resources that don't like being shared, or mingling of the outputs from the different processes.
If you also want to wait for all ten to finish, start with:
#!/bin/bash
for ((i = 1; i <= 10; i++))
do
    php /var/www/get.php &
done
wait
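To avoid the mingled output mentioned above, a common variation is to give every background instance its own log file, for example (the log path is just an illustration):

#!/bin/bash
for ((i = 1; i <= 10; i++))
do
    # each instance writes to its own file, so outputs never interleave
    php /var/www/get.php > "/tmp/get.$i.log" 2>&1 &
done
wait   # block until all ten background instances have finished

Each run then leaves ten separate log files behind instead of one interleaved stream.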
#!/bin/bash
for ((i = 1; i <= 10; i++))
do
    php /var/www/get.php &
done
Just add an ampersand.
Another variation:
seq 1 3 | while read i; do /usr/bin/php -v & done

checking if a streaming server is up using bash?

I use Ubuntu and am trying to write a script that does the following:
- test if an audio stream works
- if not, send an email.
I have tried the following code (running as a cron job every 10 minutes), which 'works' if I supply the wrong password, for example (it sends an email then), but does nothing if the actual server is down (tested by killing the server). Any ideas on how to fix the script?
Thanks in advance!
#!/bin/bash
#servertest.sh
username=user1
password=xyz
url="http://wwww.streamingaudioserver.com -passwd $password -user $username"
mplayer $url &
sleep 5
test=$(pgrep -c mplayer)
if [ "$test" = 0 ]; then
    # server is down!
    mailfile="downmail.txt"
    /usr/sbin/ssmtp test@maildomain.com < "/home/test/$mailfile"
fi
killall mplayer
sleep 5
exit
Your problem is in this line:
$mailfile="downmail.txt"
remove the dollar sign and that should do it.
You should be getting error messages in your cron log or emails to the crontab owner complaining about a command not found or no such file.
Edit:
Does your script work if run from the command line (with the stream down) rather than cron?
Try using set -x (or #!/bin/bash -x) in the script to turn on tracing, or add echo "PID: $$, value of \$test: $test" > /tmp/script.out after the assignment, to see whether you're getting the zero you're expecting.
Also, try an ssmtp command outside the if to make sure it's working (but I think you already said it is under some circumstances).
Try your script without ever starting mplayer.
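As a different angle (a sketch only, not a drop-in fix for the script above): counting mplayer processes can be fragile, because pgrep -c mplayer also matches any unrelated mplayer the user has open, and mplayer may keep retrying while the server is down. Probing the stream URL directly with curl and a timeout sidesteps both problems; the URL, credentials and mail command below just mirror the placeholders from the question, and how the real server expects authentication is an assumption:

#!/bin/bash
# Ask curl how many bytes it could pull from the stream within 10 seconds.
bytes=$(curl --silent --max-time 10 --user "user1:xyz" \
             --output /dev/null --write-out '%{size_download}' \
             "http://www.streamingaudioserver.com")
if [ "${bytes:-0}" -eq 0 ]; then
    # no data at all: the stream (or the whole server) is down, send the alert
    /usr/sbin/ssmtp test@maildomain.com < /home/test/downmail.txt
fi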
