cap production deploy fails on composer:run install - composer-php

I'm trying to deploy a project to our production server, as I've done several times before, but it seems to be failing on the composer:run install step.
I run cap production deploy:check and everything is great.
I then run cap production deploy --trace, which returns the following error when it reaches the composer:run step:
>** Invoke deploy:updated (first_time)
>** Invoke composer:install (first_time)
>** Execute composer:install
>** Invoke composer:run (first_time)
>** Execute composer:run
>00:13 composer:run
> 01 composer install --no-dev --prefer-dist --no-interaction --quiet --optimize-autoloader
> 01 stdin: is not a tty
>cap aborted!
>SSHKit::Runner::ExecuteError: Exception while executing as xxx@xxxx.xxx: composer exit status: 1
>composer stdout: Nothing written
>composer stderr: stdin: is not a tty
>/Library/Ruby/Gems/2.0.0/gems/sshkit-1.11.1/lib/sshkit/runners/parallel.rb:15:in `rescue in block (2 levels) in execute'
>/Library/Ruby/Gems/2.0.0/gems/sshkit-1.11.1/lib/sshkit/runners/parallel.rb:11:in `block (2 levels) in execute'
>SSHKit::Command::Failed: composer exit status: 1
>composer stdout: Nothing written
>composer stderr: stdin: is not a tty
>/Library/Ruby/Gems/2.0.0/gems/sshkit-1.11.1/lib/sshkit/command.rb:100:in `exit_status='
>/Library/Ruby/Gems/2.0.0/gems/sshkit-1.11.1/lib/sshkit/backends/netssh.rb:148:in `execute_command'
>/Library/Ruby/Gems/2.0.0/gems/sshkit-1.11.1/lib/sshkit/backends/abstract.rb:141:in `block in create_command_and_execute'
>/Library/Ruby/Gems/2.0.0/gems/sshkit-1.11.1/lib/sshkit/backends/abstract.rb:141:in `tap'
>/Library/Ruby/Gems/2.0.0/gems/sshkit-1.11.1/lib/sshkit/backends/abstract.rb:141:in `create_command_and_execute'
>/Library/Ruby/Gems/2.0.0/gems/sshkit-1.11.1/lib/sshkit/backends/abstract.rb:74:in `execute'
>/Library/Ruby/Gems/2.0.0/gems/capistrano-composer-0.0.6/lib/capistrano/tasks/composer.rake:27:in `block (4 levels) in <top (required)>'
>/Library/Ruby/Gems/2.0.0/gems/sshkit-1.11.1/lib/sshkit/backends/abstract.rb:85:in `within'
>/Library/Ruby/Gems/2.0.0/gems/capistrano-composer-0.0.6/lib/capistrano/tasks/composer.rake:26:in `block (3 levels) in <top (required)>'
>/Library/Ruby/Gems/2.0.0/gems/sshkit-1.11.1/lib/sshkit/backends/abstract.rb:29:in `instance_exec'
>/Library/Ruby/Gems/2.0.0/gems/sshkit-1.11.1/lib/sshkit/backends/abstract.rb:29:in `run'
>/Library/Ruby/Gems/2.0.0/gems/sshkit-1.11.1/lib/sshkit/runners/parallel.rb:12:in `block (2 levels) in execute'
>Tasks: TOP => composer:run
>The deploy has failed with an error: Exception while executing as xxx@xxxx.xxx: composer exit status: 1
>composer stdout: Nothing written
>composer stderr: stdin: is not a tty
>** Invoke deploy:failed (first_time)
>** Execute deploy:failed
I've tried looking up this composer exit status: 1 and it seems to just be a generic error.
I've ssh'd to the production server, and it seems cap is successfully creating a release folder, and pulling the files down from the repo. The error seems to occur when it's installing dependencies.
I'm at a bit of a loss as to how to resolve this. Most other threads I've found seem to have more informative error messages.
Capistrano log below:
INFO ---------------------------------------------------------------------------
INFO START 2016-09-30 09:25:00 -0230 cap production deploy
INFO ---------------------------------------------------------------------------
INFO [f3a390e9] Running /usr/bin/env mkdir -p /home/USER/capistrano_tmp as USER#xxxx.xx
DEBUG [f3a390e9] Command: ( export WP_ENV="staging" ; /usr/bin/env mkdir -p /home/USER/capistrano_tmp )
INFO [f3a390e9] Finished in 2.175 seconds with exit status 0 (successful).
DEBUG Uploading /home/USER/capistrano_tmp/git-ssh-banl-production-MK.sh 0.0%
INFO Uploading /home/USER/capistrano_tmp/git-ssh-banl-production-MK.sh 100.0%
INFO [523fe25b] Running /usr/bin/env chmod 700 /home/USER/capistrano_tmp/git-ssh-banl-production-MK.sh as USER#xxxx.xx
DEBUG [523fe25b] Command: ( export WP_ENV="staging" ; /usr/bin/env chmod 700 /home/USER/capistrano_tmp/git-ssh-banl-production-MK.sh )
INFO [523fe25b] Finished in 0.125 seconds with exit status 0 (successful).
INFO [257d5c05] Running /usr/bin/env git ls-remote --heads $GIT_REPO as USER#xxxx.xx
DEBUG [257d5c05] Command: ( export WP_ENV="staging" GIT_ASKPASS="/bin/echo" GIT_SSH="/home/USER/capistrano_tmp/git-ssh-banl-production-MK.sh" ; /usr/bin/env git ls-remote --heads $GIT_REPO )
DEBUG [257d5c05] a9bd8b177eb5287fa6b968c2a92207c1c25e8bf4 refs/heads/master
INFO [257d5c05] Finished in 6.650 seconds with exit status 0 (successful).
INFO [63f0ffa5] Running /usr/bin/env mkdir -p /home/USER/public_html/shared /home/USER/public_html/releases as USER#xxxx.xx
DEBUG [63f0ffa5] Command: ( export WP_ENV="staging" ; /usr/bin/env mkdir -p /home/USER/public_html/shared /home/USER/public_html/releases )
INFO [63f0ffa5] Finished in 0.124 seconds with exit status 0 (successful).
INFO [f7a9eb45] Running /usr/bin/env mkdir -p /home/USER/public_html/shared/web/app/uploads as USER#xxxx.xx
DEBUG [f7a9eb45] Command: ( export WP_ENV="staging" ; /usr/bin/env mkdir -p /home/USER/public_html/shared/web/app/uploads )
INFO [f7a9eb45] Finished in 0.123 seconds with exit status 0 (successful).
INFO [ce8a26f9] Running /usr/bin/env mkdir -p /home/USER/public_html/shared as USER#xxxx.xx
DEBUG [ce8a26f9] Command: ( export WP_ENV="staging" ; /usr/bin/env mkdir -p /home/USER/public_html/shared )
INFO [ce8a26f9] Finished in 0.123 seconds with exit status 0 (successful).
DEBUG [0f2919f9] Running [ -f /home/USER/public_html/shared/.env ] as USER#xxxx.xx
DEBUG [0f2919f9] Command: [ -f /home/USER/public_html/shared/.env ]
DEBUG [0f2919f9] Finished in 0.123 seconds with exit status 0 (successful).
DEBUG [0a95dd46] Running [ -f /home/USER/public_html/current/REVISION ] as USER#xxxx.xx
DEBUG [0a95dd46] Command: [ -f /home/USER/public_html/current/REVISION ]
DEBUG [0a95dd46] Finished in 0.123 seconds with exit status 1 (failed).
DEBUG [ca7de6e3] Running [ -f /home/USER/public_html/repo/HEAD ] as USER#xxxx.xx
DEBUG [ca7de6e3] Command: [ -f /home/USER/public_html/repo/HEAD ]
DEBUG [ca7de6e3] Finished in 0.122 seconds with exit status 0 (successful).
INFO The repository mirror is at /home/USER/public_html/repo
DEBUG [5efe620a] Running if test ! -d /home/USER/public_html/repo; then echo "Directory does not exist '/home/USER/public_html/repo'" 1>&2; false; fi as USER#xxxx.xx
DEBUG [5efe620a] Command: if test ! -d /home/USER/public_html/repo; then echo "Directory does not exist '/home/USER/public_html/repo'" 1>&2; false; fi
DEBUG [5efe620a] Finished in 0.123 seconds with exit status 0 (successful).
INFO [b0a1ad08] Running /usr/bin/env git remote update --prune as USER#xxxx.xx
DEBUG [b0a1ad08] Command: cd /home/USER/public_html/repo && ( export WP_ENV="staging" GIT_ASKPASS="/bin/echo" GIT_SSH="/home/USER/capistrano_tmp/git-ssh-banl-production-MK.sh" ; /usr/bin/env git remote update --prune )
DEBUG [b0a1ad08] Fetching origin
INFO [b0a1ad08] Finished in 5.040 seconds with exit status 0 (successful).
DEBUG [6d7e14fa] Running if test ! -d /home/USER/public_html/repo; then echo "Directory does not exist '/home/USER/public_html/repo'" 1>&2; false; fi as USER#xxxx.xx
DEBUG [6d7e14fa] Command: if test ! -d /home/USER/public_html/repo; then echo "Directory does not exist '/home/USER/public_html/repo'" 1>&2; false; fi
DEBUG [6d7e14fa] Finished in 0.123 seconds with exit status 0 (successful).
INFO [0c8f006e] Running /usr/bin/env mkdir -p /home/USER/public_html/releases/20160930115512 as USER#xxxx.xx
DEBUG [0c8f006e] Command: cd /home/USER/public_html/repo && ( export WP_ENV="staging" GIT_ASKPASS="/bin/echo" GIT_SSH="/home/USER/capistrano_tmp/git-ssh-banl-production-MK.sh" ; /usr/bin/env mkdir -p /home/USER/public_html/releases/20160930115512 )
INFO [0c8f006e] Finished in 0.125 seconds with exit status 0 (successful).
INFO [a31715de] Running /usr/bin/env git archive master | tar -x -f - -C /home/USER/public_html/releases/20160930115512 as USER#xxxx.xx
DEBUG [a31715de] Command: cd /home/USER/public_html/repo && ( export WP_ENV="staging" GIT_ASKPASS="/bin/echo" GIT_SSH="/home/USER/capistrano_tmp/git-ssh-banl-production-MK.sh" ; /usr/bin/env git archive master | tar -x -f - -C /home/USER/public_html/releases/20160930115512 )
INFO [a31715de] Finished in 0.158 seconds with exit status 0 (successful).
DEBUG [e57ef477] Running if test ! -d /home/USER/public_html/repo; then echo "Directory does not exist '/home/USER/public_html/repo'" 1>&2; false; fi as USER#xxxx.xx
DEBUG [e57ef477] Command: if test ! -d /home/USER/public_html/repo; then echo "Directory does not exist '/home/USER/public_html/repo'" 1>&2; false; fi
DEBUG [e57ef477] Finished in 0.124 seconds with exit status 0 (successful).
DEBUG [cea943e0] Running /usr/bin/env git rev-list --max-count=1 master as USER#xxxx.xx
DEBUG [cea943e0] Command: cd /home/USER/public_html/repo && ( export WP_ENV="staging" GIT_ASKPASS="/bin/echo" GIT_SSH="/home/USER/capistrano_tmp/git-ssh-banl-production-MK.sh" ; /usr/bin/env git rev-list --max-count=1 master )
DEBUG [cea943e0] a9bd8b177eb5287fa6b968c2a92207c1c25e8bf4
DEBUG [cea943e0] Finished in 0.126 seconds with exit status 0 (successful).
DEBUG [1759b360] Running if test ! -d /home/USER/public_html/releases/20160930115512; then echo "Directory does not exist '/home/USER/public_html/releases/20160930115512'" 1>&2; false; fi as USER#xxxx.xx
DEBUG [1759b360] Command: if test ! -d /home/USER/public_html/releases/20160930115512; then echo "Directory does not exist '/home/USER/public_html/releases/20160930115512'" 1>&2; false; fi
DEBUG [1759b360] Finished in 0.122 seconds with exit status 0 (successful).
INFO [ae64693c] Running /usr/bin/env echo "a9bd8b177eb5287fa6b968c2a92207c1c25e8bf4" >> REVISION as USER#xxxx.xx
DEBUG [ae64693c] Command: cd /home/USER/public_html/releases/20160930115512 && ( export WP_ENV="staging" ; /usr/bin/env echo "a9bd8b177eb5287fa6b968c2a92207c1c25e8bf4" >> REVISION )
INFO [ae64693c] Finished in 0.132 seconds with exit status 0 (successful).
INFO [e9cce54e] Running /usr/bin/env mkdir -p /home/USER/public_html/releases/20160930115512 as USER#xxxx.xx
DEBUG [e9cce54e] Command: ( export WP_ENV="staging" ; /usr/bin/env mkdir -p /home/USER/public_html/releases/20160930115512 )
INFO [e9cce54e] Finished in 0.123 seconds with exit status 0 (successful).
DEBUG [db1a3471] Running [ -L /home/USER/public_html/releases/20160930115512/.env ] as USER#xxxx.xx
DEBUG [db1a3471] Command: [ -L /home/USER/public_html/releases/20160930115512/.env ]
DEBUG [db1a3471] Finished in 0.123 seconds with exit status 1 (failed).
DEBUG [848e8fe8] Running [ -f /home/USER/public_html/releases/20160930115512/.env ] as USER#xxxx.xx
DEBUG [848e8fe8] Command: [ -f /home/USER/public_html/releases/20160930115512/.env ]
DEBUG [848e8fe8] Finished in 0.121 seconds with exit status 1 (failed).
INFO [7d1f7edd] Running /usr/bin/env ln -s /home/USER/public_html/shared/.env /home/USER/public_html/releases/20160930115512/.env as USER#xxxx.xx
DEBUG [7d1f7edd] Command: ( export WP_ENV="staging" ; /usr/bin/env ln -s /home/USER/public_html/shared/.env /home/USER/public_html/releases/20160930115512/.env )
INFO [7d1f7edd] Finished in 0.122 seconds with exit status 0 (successful).
INFO [10b70c5d] Running /usr/bin/env mkdir -p /home/USER/public_html/releases/20160930115512/web/app as USER#xxxx.xx
DEBUG [10b70c5d] Command: ( export WP_ENV="staging" ; /usr/bin/env mkdir -p /home/USER/public_html/releases/20160930115512/web/app )
INFO [10b70c5d] Finished in 0.123 seconds with exit status 0 (successful).
DEBUG [880e8bc2] Running [ -L /home/USER/public_html/releases/20160930115512/web/app/uploads ] as USER#xxxx.xx
DEBUG [880e8bc2] Command: [ -L /home/USER/public_html/releases/20160930115512/web/app/uploads ]
DEBUG [880e8bc2] Finished in 0.123 seconds with exit status 1 (failed).
DEBUG [349e1b98] Running [ -d /home/USER/public_html/releases/20160930115512/web/app/uploads ] as USER#xxxx.xx
DEBUG [349e1b98] Command: [ -d /home/USER/public_html/releases/20160930115512/web/app/uploads ]
DEBUG [349e1b98] Finished in 0.121 seconds with exit status 0 (successful).
INFO [8045d8e0] Running /usr/bin/env rm -rf /home/USER/public_html/releases/20160930115512/web/app/uploads as USER#xxxx.xx
DEBUG [8045d8e0] Command: ( export WP_ENV="staging" ; /usr/bin/env rm -rf /home/USER/public_html/releases/20160930115512/web/app/uploads )
INFO [8045d8e0] Finished in 0.125 seconds with exit status 0 (successful).
INFO [07a69ec8] Running /usr/bin/env ln -s /home/USER/public_html/shared/web/app/uploads /home/USER/public_html/releases/20160930115512/web/app/uploads as USER#xxxx.xx
DEBUG [07a69ec8] Command: ( export WP_ENV="staging" ; /usr/bin/env ln -s /home/USER/public_html/shared/web/app/uploads /home/USER/public_html/releases/20160930115512/web/app/uploads )
INFO [07a69ec8] Finished in 0.124 seconds with exit status 0 (successful).
DEBUG [ca95f29d] Running if test ! -d /home/USER/public_html/releases/20160930115512; then echo "Directory does not exist '/home/USER/public_html/releases/20160930115512'" 1>&2; false; fi as USER#xxxx.xx
DEBUG [ca95f29d] Command: if test ! -d /home/USER/public_html/releases/20160930115512; then echo "Directory does not exist '/home/USER/public_html/releases/20160930115512'" 1>&2; false; fi
DEBUG [ca95f29d] Finished in 0.122 seconds with exit status 0 (successful).
INFO [ee024dad] Running /usr/bin/env composer install --no-dev --prefer-dist --no-interaction --quiet --optimize-autoloader as USER#xxxx.xx
DEBUG [ee024dad] Command: cd /home/USER/public_html/releases/20160930115512 && ( export WP_ENV="staging" ; /usr/bin/env composer install --no-dev --prefer-dist --no-interaction --quiet --optimize-autoloader )

The relevant error message is:
stdin: is not a tty
This probably means that composer is expecting interactive user input. I am not familiar with composer, but this seems to explain why a non-interactive Capistrano deploy fails but you are able to run the command via an interactive SSH session.
You could try adding this to your Capistrano deploy.rb:
set :pty, true
This tells Capistrano to use a pseudo terminal, which may be enough to make composer happy.
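If you want to confirm that diagnosis before changing anything, you can reproduce the non-interactive failure by hand. This is only a sketch: the host below is a placeholder, and the release path is copied from the log above.
# Run the exact command Capistrano runs, over a non-interactive SSH session.
# USER@your-server is a placeholder; the release path is taken from the log above.
ssh USER@your-server 'cd /home/USER/public_html/releases/20160930115512 && composer install --no-dev --prefer-dist --no-interaction --quiet --optimize-autoloader'
# If this also prints "stdin: is not a tty", something in the remote shell's
# startup files is likely emitting the message in non-interactive sessions,
# which is what set :pty, true works around.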

Related

Invalid threads definition: entries have to be defined as RULE=THREADS pairs (with THREADS being a positive integer). Unparseable value

Did you notice that set-threads does not work with a recent version of Snakemake? The reproduction looks long, but you just have to copy/paste. Here is an MRE:
mkdir snakemake-test && cd snakemake-test
touch snakeFile
mkdir profile && touch profile/config.yaml && touch profile/status-sacct.sh && chmod +x profile/status-sacct.sh
mkdir envs && touch envs/environment1.yaml && touch envs/environment2.yaml
In envs/environment1.yaml:
channels:
- bioconda
- conda-forge
dependencies:
- snakemake-minimal=7.3.8
- pandas=1.4.2
- peppy=0.31.2
- eido=0.1.4
In envs/environment2.yaml:
channels:
- bioconda
- conda-forge
dependencies:
- snakemake-minimal=6.15.1
- pandas=1.4.2
- peppy=0.31.2
- eido=0.1.4
In snakeFile:
onstart:
    print("\t Creating jobs output subfolders...\n")
    shell("mkdir -p jobs/downloadgenome")

GENOME = "mm39"
PREFIX = "Mus_musculus.GRCm39"

rule all:
    input:
        expand("data/fasta/{genome}/{prefix}.dna.chromosome.1.fa", genome=GENOME, prefix=PREFIX)

rule downloadgenome:
    output:
        "data/fasta/{genome}/{prefix}.dna.chromosome.1.fa"
    params:
        genomeLinks = "http://ftp.ensembl.org/pub/release-106/fasta/mus_musculus/dna/Mus_musculus.GRCm39.dna.chromosome.1.fa.gz"
    threads: 4
    shell:
        """
        wget {params.genomeLinks}
        gunzip {wildcards.prefix}.dna.chromosome.1.fa.gz
        mkdir -p data/fasta/{wildcards.genome}
        mv {wildcards.prefix}.dna.chromosome.1.fa data/fasta/{wildcards.genome}
        """
In profile/config.yaml:
snakefile: snakeFile
latency-wait: 60
printshellcmds: True
max-jobs-per-second: 1
max-status-checks-per-second: 10
jobs: 400
jobname: "{rule}.{jobid}"
cluster: "sbatch --output=\"jobs/{rule}/slurm_%x_%j.out\" --error=\"jobs/{rule}/slurm_%x_%j.log\" --cpus-per-task={threads} --ntasks=1 --parsable" # --parsable added for handling the timeout exception
cluster-status: "./profile/status-sacct.sh" # Use to handle timeout exception, do not forget to chmod +x
set-threads:
- downloadgenome=2
In profile/status-sacct.sh:
#!/usr/bin/env bash
# Check status of Slurm job
jobid="$1"

if [[ "$jobid" == Submitted ]]
then
    echo smk-simple-slurm: Invalid job ID: "$jobid" >&2
    echo smk-simple-slurm: Did you remember to add the flag --parsable to your sbatch call? >&2
    exit 1
fi

output=`sacct -j "$jobid" --format State --noheader | head -n 1 | awk '{print $1}'`

if [[ $output =~ ^(COMPLETED).* ]]
then
    echo success
elif [[ $output =~ ^(RUNNING|PENDING|COMPLETING|CONFIGURING|SUSPENDED).* ]]
then
    echo running
else
    echo failed
fi
Now build the conda environments:
cd envs
conda env create -p ./smake --file environment1.yaml
conda env create -p ./smake2 --file environment2.yaml
cd ..
If you run the whole thing with smake2 (snakemake-minimal=6.15.1), it indeed runs the job with 2 CPUs:
conda activate envs/smake2
snakemake --profile profile/
conda deactivate
rm -r data
rm -r jobs
If you do the same thing with smake (snakemake-minimal=7.3.8), it will crash with the error: Invalid threads definition: entries have to be defined as RULE=THREADS pairs (with THREADS being a positive integer). Unparseable value: '{downloadgenome :'.
conda activate envs/smake
snakemake --profile profile/
more jobs/downloadgenome/*log
I have tried many things to solve the problem, without success...
This was indeed a bug and has been fixed in PR 1615.
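If upgrading is an option, rebuilding the smake environment with a snakemake-minimal release that contains the fix should make the profile's set-threads entry parse again. The version constraint below is an assumption; check the release notes for the PR first.
conda activate envs/smake
# The constraint is an assumption: pick the first release that contains the fix.
conda install -c bioconda -c conda-forge 'snakemake-minimal>7.3.8'
snakemake --version
snakemake --profile profile/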

start an application with linux network namespaces using a bash function [duplicate]

This question already has answers here:
How can I execute a bash function using sudo?
(10 answers)
Closed 4 years ago.
I have this script to exec an application and handle any error on startup, but I need better control over it and want to use "network namespaces" to run this app in the netns with id "controlnet". With the last line the script runs OK, but I'm redirected to a blank screen; after exiting from it I can see the application running, but it isn't running inside the "controlnet" namespace.
If I do the steps manually, all is OK:
sudo ip netns exec controlnet sudo -u $USER -i
cd /home/app-folder/
./hlds_run -game cstrike -pidfile ogp_game_startup.pid +map de_dust +ip 1.2.3.4 +port 27015 +maxplayers 12
How can I add these lines to the full bash script?
Script used:
#!/bin/bash
function startServer(){
    NUMSECONDS=`expr $(date +%s)`
    until ./hlds_run -game cstrike -pidfile ogp_game_startup.pid +map de_dust +ip 1.2.3.4 +port 27015 +maxplayers 14 ; do
        let DIFF=(`date +%s` - "$NUMSECONDS")
        if [ "$DIFF" -gt 15 ]; then
            NUMSECONDS=`expr $(date +%s)`
            echo "Server './hlds_run -game cstrike -pidfile ogp_game_startup.pid +map de_dust +ip 1.2.3.4 +port 27015 +maxplayers 12 ' crashed with exit code $?. Respawning..." >&2
        fi
        sleep 3
    done
    let DIFF=(`date +%s` - "$NUMSECONDS")
    if [ ! -e "SERVER_STOPPED" ] && [ "$DIFF" -gt 15 ]; then
        startServer
    fi
}
sudo ip netns exec controlnet sudo -u myuser -i && cd /home/ && startServer
The key issue here is that sudo -u myuser -i starts a new shell session. Further commands, like cd /home, aren't run inside that shell session; instead, they're run after it exits.
Thus, you need to move startServer into the sudo command, instead of running it after the sudo command.
One way to do this is by passing the code that should be run under sudo via a heredoc:
#!/bin/bash
sudo ip netns exec controlnet sudo -u myuser bash -s <<'EOF'
startServer() {
    local endTime startTime retval
    while :; do
        startTime=$SECONDS
        ./hlds_run -game cstrike -pidfile ogp_game_startup.pid +map de_dust +ip 1.2.3.4 +port 27015 +maxplayers 14; retval=$?
        endTime=$SECONDS
        if (( (endTime - startTime) > 15 )); then
            echo "Server crashed with exit code $retval. Respawning..." >&2
        else
            echo "Server exited with status $retval after less than 15 seconds" >&2
            echo "  not attempting to respawn" >&2
            return "$retval"
        fi
        sleep 3
    done
}
cd /home/ || exit
startServer
EOF
What's important here is that we're no longer running sudo -i and expecting the rest of the script to be fed into the escalated shell implicitly; instead, we're running bash -s (which reads script text to run from stdin), and passing both the text of the startServer function and a command that invokes it within that stdin stream.
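An alternative sketch, assuming startServer is defined in the calling (unprivileged) script: serialize the function with declare -f and hand it to the escalated shell via bash -c instead of a heredoc.
#!/bin/bash
# Minimal stand-in for the respawn loop above (simplified so the sketch stays short).
startServer() {
    cd /home/app-folder/ || exit
    until ./hlds_run -game cstrike -pidfile ogp_game_startup.pid +map de_dust +ip 1.2.3.4 +port 27015 +maxplayers 14; do
        sleep 3
    done
}
# declare -f prints the function's definition; the escalated bash re-reads it and runs it.
sudo ip netns exec controlnet sudo -u myuser bash -c "
$(declare -f startServer)
startServer
"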

how to wait for wget to finish before getting more resources

I am new to bash.
I want to wget some resources in parallel.
What is the problem with the following code:
for item in $list
do
    if [ $i -le 10 ]; then
        wget -b $item
        let "i++"
    else
        wait
        i=1
    fi
done
When I execute this script, an error is thrown:
fork: Resource temporarily unavailable
My question is how to use wget the right way.
Edit:
My problem is that there are about four thousand URLs to download; if I let all these jobs run in parallel, fork: Resource temporarily unavailable is thrown. I don't know how to limit how many run in parallel.
Use jobs|grep to check background jobs:
#!/bin/bash
urls=('www.cnn.com' 'www.wikipedia.org')   ## input data
for ((i=-1;++i<${#urls[@]};)); do
    curl -L -s ${urls[$i]} >file-$i.html &  ## background jobs
done
until [[ -z `jobs|grep -E -v 'Done|Terminated'` ]]; do
    sleep 0.05; echo -n '.'                 ## do something while waiting
done
echo; ls -l file*\.html                     ## list downloaded files
Results:
............................
-rw-r--r-- 1 xxx xxx 155421 Jan 20 00:50 file-0.html
-rw-r--r-- 1 xxx xxx 74711 Jan 20 00:50 file-1.html
Another variance, tasks in simple parallel:
#!/bin/bash
urls=('www.yahoo.com' 'www.hotmail.com' 'stackoverflow.com')
_task1(){                       ## task 1: download files
    for ((i=-1;++i<${#urls[@]};)); do
        curl -L -s ${urls[$i]} >file-$i.html &
    done; wait
}
_task2(){ echo hello; }         ## task 2: a fake task
_task3(){ echo hi; }            ## task 3: a fake task
_task1 & _task2 & _task3 &      ## run them in parallel
wait                            ## and wait for them
ls -l file*\.html               ## list results of all tasks
echo done                       ## and do something
Results:
hello
hi
-rw-r--r-- 1 xxx xxx 320013 Jan 20 02:19 file-0.html
-rw-r--r-- 1 xxx xxx 3566 Jan 20 02:19 file-1.html
-rw-r--r-- 1 xxx xxx 253348 Jan 20 02:19 file-2.html
done
Example with limit how many downloads in parallel at a time (max=3):
#!/bin/bash
m=3   ## max jobs (downloads) at a time
t=4   ## retries for each download

_debug(){   ## list jobs to see (debug)
    printf ":: jobs running: %s\n" "$(echo `jobs -p`)"
}

## sample input data
## is redirected to filehandle=3
exec 3<<-EOF
www.google.com google.html
www.hotmail.com hotmail.html
www.wikipedia.org wiki.html
www.cisco.com cisco.html
www.cnn.com cnn.html
www.yahoo.com yahoo.html
EOF

## read data from filehandle=3, line by line
while IFS=' ' read -u 3 -r u f || [[ -n "$f" ]]; do
    [[ -z "$f" ]] && continue                   ## ignore empty input line
    while [[ $(jobs -p|wc -l) -ge "$m" ]]; do   ## while $m or more jobs are running
        _debug                                  ## then list jobs to see (debug)
        wait -n                                 ## and wait for some job(s) to finish
    done
    curl --retry $t -Ls "$u" >"$f" &            ## download in background
    printf "job %d: %s => %s\n" $! "$u" "$f"    ## print job info to see (debug)
done
_debug; wait; ls -l *\.html                     ## see final results
Outputs:
job 22992: www.google.com => google.html
job 22996: www.hotmail.com => hotmail.html
job 23000: www.wikipedia.org => wiki.html
:: jobs running: 22992 22996 23000
job 23022: www.cisco.com => cisco.html
:: jobs running: 22996 23000 23022
job 23034: www.cnn.com => cnn.html
:: jobs running: 23000 23022 23034
job 23052: www.yahoo.com => yahoo.html
:: jobs running: 23000 23034 23052
-rw-r--r-- 1 xxx xxx 61473 Jan 21 01:15 cisco.html
-rw-r--r-- 1 xxx xxx 155055 Jan 21 01:15 cnn.html
-rw-r--r-- 1 xxx xxx 12514 Jan 21 01:15 google.html
-rw-r--r-- 1 xxx xxx 3566 Jan 21 01:15 hotmail.html
-rw-r--r-- 1 xxx xxx 74711 Jan 21 01:15 wiki.html
-rw-r--r-- 1 xxx xxx 319967 Jan 21 01:15 yahoo.html
After reading your updated question, I think it is much easier to use lftp, which can log and download (automatically follow-link + retry-download + continue-download); you'll never need to worry about job/fork resources because you run only a few lftp commands. Just split your download list into some smaller lists, and lftp will download for you:
$ cat downthemall.sh
#!/bin/bash
## run: lftp -c 'help get'
## to know how to use lftp to download files
## with automatically retry+continue
p=()  ## pid list
for l in *\.lst; do
    lftp -f "$l" >/dev/null &   ## run processes in parallel
    p+=("--pid=$!")             ## record pid
done
until [[ -f d.log ]]; do sleep 0.5; done   ## wait for the log file
tail -f d.log "${p[@]}"                    ## print results while downloading
Outputs:
$ cat 1.lst
set xfer:log true
set xfer:log-file d.log
get -c http://www.microsoft.com -o micro.html
get -c http://www.cisco.com -o cisco.html
get -c http://www.wikipedia.org -o wiki.html
$ cat 2.lst
set xfer:log true
set xfer:log-file d.log
get -c http://www.google.com -o google.html
get -c http://www.cnn.com -o cnn.html
get -c http://www.yahoo.com -o yahoo.html
$ cat 3.lst
set xfer:log true
set xfer:log-file d.log
get -c http://www.hp.com -o hp.html
get -c http://www.ibm.com -o ibm.html
get -c http://stackoverflow.com -o stack.html
$ rm *log *html;./downthemall.sh
2018-01-22 02:10:13 http://www.google.com.vn/?gfe_rd=cr&dcr=0&ei=leVkWqiOKfLs8AeBvqBA -> /tmp/1/google.html 0-12538 103.1 KiB/s
2018-01-22 02:10:13 http://edition.cnn.com/ -> /tmp/1/cnn.html 0-153601 362.6 KiB/s
2018-01-22 02:10:13 https://www.microsoft.com/vi-vn/ -> /tmp/1/micro.html 0-129791 204.0 KiB/s
2018-01-22 02:10:14 https://www.cisco.com/ -> /tmp/1/cisco.html 0-61473 328.0 KiB/s
2018-01-22 02:10:14 http://www8.hp.com/vn/en/home.html -> /tmp/1/hp.html 0-73136 92.2 KiB/s
2018-01-22 02:10:14 https://www.ibm.com/us-en/ -> /tmp/1/ibm.html 0-32700 131.4 KiB/s
2018-01-22 02:10:15 https://vn.yahoo.com/?p=us -> /tmp/1/yahoo.html 0-318657 208.4 KiB/s
2018-01-22 02:10:15 https://www.wikipedia.org/ -> /tmp/1/wiki.html 0-74711 60.7 KiB/s
2018-01-22 02:10:16 https://stackoverflow.com/ -> /tmp/1/stack.html 0-253033 180.8
With updated question, here is an updated answer.
The following script launches 10 (can be changed to any number) wget processes in the background and monitors them. Once one of the processes finishes, it picks the next URL in the list and tries to keep the same $maxn (10) processes running in the background, until it runs out of URLs from the list ($urlfile). There are inline comments to help understand it.
$ cat wget.sh
#!/bin/bash
wget_bg()
{
    > ./wget.pids   # Start with empty pidfile
    urlfile="$1"
    maxn=$2
    cnt=0;
    while read -r url
    do
        if [ $cnt -lt $maxn ] && [ ! -z "$url" ]; then   # Only maxn processes will run in the background
            echo -n "wget $url ..."
            wget "$url" &>/dev/null &
            pidwget=$!                        # This gets the backgrounded pid
            echo "$pidwget" >> ./wget.pids    # fill pidfile
            echo "pid[$pidwget]"
            ((cnt++));
        fi
        while [ $cnt -eq $maxn ]   # Start monitoring as soon as the maxn process count hits
        do
            while read -r pids
            do
                if ps -p $pids > /dev/null; then   # Check pid running
                    :
                else
                    sed -i "/$pids/d" wget.pids    # If not, remove it from pidfile
                    ((cnt--));                     # decrement counter
                fi
            done < wget.pids
        done
    done < "$urlfile"
}
# This runs 10 wget processes at a time in the bg. Modify for more or less.
wget_bg ./test.txt 10
To run:
$ chmod u+x ./wget.sh
$ ./wget.sh
wget blah.com ...pid[13012]
wget whatever.com ...pid[13013]
wget thing.com ...pid[13014]
wget foo.com ...pid[13015]
wget bar.com ...pid[13016]
wget baz.com ...pid[13017]
wget steve.com ...pid[13018]
wget kendal.com ...pid[13019]
Add this in your if statement:
until wget -b "$item"; do
    printf '.'
    sleep 2
done
The loop will wait until the process has finished, printing a "." every 2 seconds.
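If the four thousand URLs are in a file, one per line, another way to cap parallelism without hand-rolled bookkeeping is xargs -P. A sketch, assuming the list lives in a hypothetical urls.txt:
# Run at most 10 wget processes at once; -n 1 passes one URL per invocation.
xargs -P 10 -n 1 wget -q < urls.txt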

Laravel 5 running queue

I have a setup where the email sending service is queued to redis driver on my laravel application.
But on my local host I need to run php artisan queue:work --daemon for the queue to be processed.
How can I run the daemon once I've pushed my code to the server? I am currently using AWS Elastic Beanstalk.
Thanks!!
Thanks @davidlee for commenting on this question... :)
Finally I found a solution for running the queue on Amazon Elastic Beanstalk. I'm using supervisord. I put a file in my Laravel root as supervise.sh. The content of supervise.sh is like this:
#!/bin/bash
#
#
# Author: Günter Grodotzki (gunter@grodotzki.co.za)
# Version: 2015-04-25
#
# install supervisord
#
# See:
# - https://github.com/Supervisor/initscripts
# - http://supervisord.org/
if [ "${SUPERVISE}" == "enable" ]; then
export HOME="/root"
export PATH="/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin"
easy_install supervisor
cat <<'EOB' > /etc/init.d/supervisord
#!/bin/bash
#
# supervisord Startup script for the Supervisor process control system
#
# Author: Mike McGrath <mmcgrath@redhat.com> (based off yumupdatesd)
# Jason Koppe <jkoppe@indeed.com> adjusted to read sysconfig,
# use supervisord tools to start/stop, conditionally wait
# for child processes to shutdown, and startup later
# Erwan Queffelec <erwan.queffelec@gmail.com>
# make script LSB-compliant
#
# chkconfig: 345 83 04
# description: Supervisor is a client/server system that allows \
# its users to monitor and control a number of processes on \
# UNIX-like operating systems.
# processname: supervisord
# config: /etc/supervisord.conf
# config: /etc/sysconfig/supervisord
# pidfile: /var/run/supervisord.pid
#
### BEGIN INIT INFO
# Provides: supervisord
# Required-Start: $all
# Required-Stop: $all
# Short-Description: start and stop Supervisor process control system
# Description: Supervisor is a client/server system that allows
# its users to monitor and control a number of processes on
# UNIX-like operating systems.
### END INIT INFO
# Source function library
. /etc/rc.d/init.d/functions
# Source system settings
if [ -f /etc/sysconfig/supervisord ]; then
. /etc/sysconfig/supervisord
fi
# Path to the supervisorctl script, server binary,
# and short-form for messages.
supervisorctl=${SUPERVISORCTL-/usr/bin/supervisorctl}
supervisord=${SUPERVISORD-/usr/bin/supervisord}
prog=supervisord
pidfile=${PIDFILE-/var/run/supervisord.pid}
lockfile=${LOCKFILE-/var/lock/subsys/supervisord}
STOP_TIMEOUT=${STOP_TIMEOUT-60}
OPTIONS="${OPTIONS--c /etc/supervisord.conf}"
RETVAL=0
start() {
echo -n $"Starting $prog: "
daemon --pidfile=${pidfile} $supervisord $OPTIONS
RETVAL=$?
echo
if [ $RETVAL -eq 0 ]; then
touch ${lockfile}
$supervisorctl $OPTIONS status
fi
return $RETVAL
}
stop() {
echo -n $"Stopping $prog: "
killproc -p ${pidfile} -d ${STOP_TIMEOUT} $supervisord
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && rm -rf ${lockfile} ${pidfile}
}
reload() {
echo -n $"Reloading $prog: "
LSB=1 killproc -p $pidfile $supervisord -HUP
RETVAL=$?
echo
if [ $RETVAL -eq 7 ]; then
failure $"$prog reload"
else
$supervisorctl $OPTIONS status
fi
}
restart() {
stop
start
}
case "$1" in
start)
start
;;
stop)
stop
;;
status)
status -p ${pidfile} $supervisord
RETVAL=$?
[ $RETVAL -eq 0 ] && $supervisorctl $OPTIONS status
;;
restart)
restart
;;
condrestart|try-restart)
if status -p ${pidfile} $supervisord >&/dev/null; then
stop
start
fi
;;
force-reload|reload)
reload
;;
*)
echo $"Usage: $prog {start|stop|restart|condrestart|try-restart|force-reload|reload}"
RETVAL=2
esac
exit $RETVAL
EOB
chmod +x /etc/init.d/supervisord
cat <<'EOB' > /etc/sysconfig/supervisord
# Configuration file for the supervisord service
#
# Author: Jason Koppe <jkoppe@indeed.com>
# original work
# Erwan Queffelec <erwan.queffelec@gmail.com>
# adjusted to new LSB-compliant init script
# make sure elasticbeanstalk PARAMS are being passed through to supervisord
. /opt/elasticbeanstalk/support/envvars
# WARNING: change these wisely! for instance, adding -d, --nodaemon
# here will lead to a very undesirable (blocking) behavior
#OPTIONS="-c /etc/supervisord.conf"
PIDFILE=/var/run/supervisord/supervisord.pid
#LOCKFILE=/var/lock/subsys/supervisord.pid
# Path to the supervisord binary
SUPERVISORD=/usr/local/bin/supervisord
# Path to the supervisorctl binary
SUPERVISORCTL=/usr/local/bin/supervisorctl
# How long should we wait before forcefully killing the supervisord process ?
#STOP_TIMEOUT=60
# Remove this if you manage number of open files in some other fashion
#ulimit -n 96000
EOB
mkdir -p /var/run/supervisord/
chown webapp: /var/run/supervisord/
cat <<'EOB' > /etc/supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock
chmod=0777
[supervisord]
logfile=/var/app/support/logs/supervisord.log
logfile_maxbytes=0
logfile_backups=0
loglevel=warn
pidfile=/var/run/supervisord/supervisord.pid
nodaemon=false
nocleanup=true
user=webapp
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
[program:laravel_queue]
command=php artisan queue:listen
directory=/var/www/html
stdout_logfile=/var/www/html/storage/logs/laravel-queue.log
logfile_maxbytes=0
logfile_backups=0
redirect_stderr=true
autostart=true
autorestart=true
startretries=86400
EOB
# this is now a little tricky, not officially documented, so might break but it is the cleanest solution
# first before the "flip" is done (e.g. switch between ondeck vs current) lets stop supervisord
echo -e '#!/usr/bin/env bash\nservice supervisord stop' > /opt/elasticbeanstalk/hooks/appdeploy/enact/00_stop_supervisord.sh
chmod +x /opt/elasticbeanstalk/hooks/appdeploy/enact/00_stop_supervisord.sh
# then right after the webserver is reloaded, we can start supervisord again
echo -e '#!/usr/bin/env bash\nservice supervisord start' > /opt/elasticbeanstalk/hooks/appdeploy/enact/99_z_start_supervisord.sh
chmod +x /opt/elasticbeanstalk/hooks/appdeploy/enact/99_z_start_supervisord.sh
fi
Honestly... I don't really understand the meaning of the above code ahaha... I just copied it from someone's blog... :D
And then we need to add a new supervise.config inside the .ebextensions folder, like this:
packages:
yum:
python27-setuptools: []
files:
"/usr/bin/supervise.sh" :
mode: "000755"
owner: root
group: root
content: |
#!/bin/bash
#
# Author: Günter Grodotzki (gunter@grodotzki.co.za)
# Version: 2015-04-25
#
# install supervisord
#
# See:
# - https://github.com/Supervisor/initscripts
# - http://supervisord.org/
if [ "${SUPERVISE}" == "enable" ]; then
export HOME="/root"
export PATH="/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin"
easy_install supervisor
cat <<'EOB' > /etc/init.d/supervisord
#!/bin/bash
#
# supervisord Startup script for the Supervisor process control system
#
# Author: Mike McGrath <mmcgrath@redhat.com> (based off yumupdatesd)
# Jason Koppe <jkoppe@indeed.com> adjusted to read sysconfig,
# use supervisord tools to start/stop, conditionally wait
# for child processes to shutdown, and startup later
# Erwan Queffelec <erwan.queffelec@gmail.com>
# make script LSB-compliant
#
# chkconfig: 345 83 04
# description: Supervisor is a client/server system that allows \
# its users to monitor and control a number of processes on \
# UNIX-like operating systems.
# processname: supervisord
# config: /etc/supervisord.conf
# config: /etc/sysconfig/supervisord
# pidfile: /var/run/supervisord.pid
#
### BEGIN INIT INFO
# Provides: supervisord
# Required-Start: $all
# Required-Stop: $all
# Short-Description: start and stop Supervisor process control system
# Description: Supervisor is a client/server system that allows
# its users to monitor and control a number of processes on
# UNIX-like operating systems.
### END INIT INFO
# Source function library
. /etc/rc.d/init.d/functions
# Source system settings
if [ -f /etc/sysconfig/supervisord ]; then
. /etc/sysconfig/supervisord
fi
# Path to the supervisorctl script, server binary,
# and short-form for messages.
supervisorctl=${SUPERVISORCTL-/usr/bin/supervisorctl}
supervisord=${SUPERVISORD-/usr/bin/supervisord}
prog=supervisord
pidfile=${PIDFILE-/var/run/supervisord.pid}
lockfile=${LOCKFILE-/var/lock/subsys/supervisord}
STOP_TIMEOUT=${STOP_TIMEOUT-60}
OPTIONS="${OPTIONS--c /etc/supervisord.conf}"
RETVAL=0
start() {
echo -n $"Starting $prog: "
daemon --pidfile=${pidfile} $supervisord $OPTIONS
RETVAL=$?
echo
if [ $RETVAL -eq 0 ]; then
touch ${lockfile}
$supervisorctl $OPTIONS status
fi
return $RETVAL
}
stop() {
echo -n $"Stopping $prog: "
killproc -p ${pidfile} -d ${STOP_TIMEOUT} $supervisord
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && rm -rf ${lockfile} ${pidfile}
}
reload() {
echo -n $"Reloading $prog: "
LSB=1 killproc -p $pidfile $supervisord -HUP
RETVAL=$?
echo
if [ $RETVAL -eq 7 ]; then
failure $"$prog reload"
else
$supervisorctl $OPTIONS status
fi
}
restart() {
stop
start
}
case "$1" in
start)
start
;;
stop)
stop
;;
status)
status -p ${pidfile} $supervisord
RETVAL=$?
[ $RETVAL -eq 0 ] && $supervisorctl $OPTIONS status
;;
restart)
restart
;;
condrestart|try-restart)
if status -p ${pidfile} $supervisord >&/dev/null; then
stop
start
fi
;;
force-reload|reload)
reload
;;
*)
echo $"Usage: $prog {start|stop|restart|condrestart|try-restart|force-reload|reload}"
RETVAL=2
esac
exit $RETVAL
EOB
chmod +x /etc/init.d/supervisord
cat <<'EOB' > /etc/sysconfig/supervisord
# Configuration file for the supervisord service
#
# Author: Jason Koppe <jkoppe@indeed.com>
# original work
# Erwan Queffelec <erwan.queffelec@gmail.com>
# adjusted to new LSB-compliant init script
# make sure elasticbeanstalk PARAMS are being passed through to supervisord
. /opt/elasticbeanstalk/support/envvars
# WARNING: change these wisely! for instance, adding -d, --nodaemon
# here will lead to a very undesirable (blocking) behavior
#OPTIONS="-c /etc/supervisord.conf"
PIDFILE=/var/run/supervisord/supervisord.pid
#LOCKFILE=/var/lock/subsys/supervisord.pid
# Path to the supervisord binary
SUPERVISORD=/usr/local/bin/supervisord
# Path to the supervisorctl binary
SUPERVISORCTL=/usr/local/bin/supervisorctl
# How long should we wait before forcefully killing the supervisord process ?
#STOP_TIMEOUT=60
# Remove this if you manage number of open files in some other fashion
#ulimit -n 96000
EOB
mkdir -p /var/run/supervisord/
chown webapp: /var/run/supervisord/
cat <<'EOB' > /etc/supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock
chmod=0777
[supervisord]
logfile=/var/app/support/logs/supervisord.log
logfile_maxbytes=0
logfile_backups=0
loglevel=warn
pidfile=/var/run/supervisord/supervisord.pid
nodaemon=false
nocleanup=true
user=webapp
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
[program:laravel_queue_general]
command=php /var/www/html/artisan queue:listen --timeout=600
directory=/var/www/html
stdout_logfile=/var/www/html/storage/logs/laravel-queue.log
logfile_maxbytes=0
logfile_backups=0
redirect_stderr=true
autostart=true
autorestart=true
startretries=86400
process_name=%(program_name)s_%(process_num)02d
numprocs=2
[program:laravel_queue_data]
command=php /var/www/html/artisan queue:listen --timeout=600 --queue=https://sqs.ap-southeast-1.amazonaws.com/333973004348/data-processing-queue
directory=/var/www/html
stdout_logfile=/var/www/html/storage/logs/laravel-queue.log
logfile_maxbytes=0
logfile_backups=0
redirect_stderr=true
autostart=true
autorestart=true
startretries=86400
process_name=%(program_name)s_%(process_num)02d
numprocs=30
[program:laravel_queue_notif]
command=php /var/www/html/artisan queue:listen --timeout=600 --queue=https://sqs.ap-southeast-1.amazonaws.com/333973004348/notifications-queue
directory=/var/www/html
stdout_logfile=/var/www/html/storage/logs/laravel-queue.log
logfile_maxbytes=0
logfile_backups=0
redirect_stderr=true
autostart=true
autorestart=true
startretries=86400
process_name=%(program_name)s_%(process_num)02d
numprocs=2
EOB
# this is now a little tricky, not officially documented, so might break but it is the cleanest solution
# first before the "flip" is done (e.g. switch between ondeck vs current) lets stop supervisord
echo -e '#!/usr/bin/env bash\nservice supervisord stop' > /opt/elasticbeanstalk/hooks/appdeploy/enact/00_stop_supervisord.sh
chmod +x /opt/elasticbeanstalk/hooks/appdeploy/enact/00_stop_supervisord.sh
# then right after the webserver is reloaded, we can start supervisord again
echo -e '#!/usr/bin/env bash\nservice supervisord start' > /opt/elasticbeanstalk/hooks/appdeploy/enact/99_z_start_supervisord.sh
chmod +x /opt/elasticbeanstalk/hooks/appdeploy/enact/99_z_start_supervisord.sh
fi
Sometimes the queue fails to start, so we need to run it manually by logging in to the Amazon EC2 instance using PuTTY or MobaXterm (my favorite SSH terminal). After logging in, we just need to execute these commands:
sudo -i
cd /usr/bin
./supervise.sh
cd /opt/elasticbeanstalk/hooks/appdeploy/enact/
./99_z_start_supervisord.sh
yaaapsss... that's all... :)
Note:
For checking whether the queue is running or not, we can use these:
ps ax|grep supervise
ps aux|grep sord
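Assuming supervisord was installed with the configuration above, you can also ask it directly which queue workers it is running (the binary and config paths come from the sysconfig and supervisord.conf written by the script):
# Query supervisord over the socket defined in /etc/supervisord.conf
sudo /usr/local/bin/supervisorctl -c /etc/supervisord.conf status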

Bash script for starting a Jenkins node doesn't start as a Linux service in CentOS at boot

I have written a bash script which starts the slave.jar process so the VM appears in Jenkins. I need to start this script as a service when Linux boots up. I placed my file in /etc/init.d with chmod +x, then ran chkconfig on it, and all the links appear in the rc.d folders; the output of chkconfig shows:
jenkins-slave 0:off 1:off 2:on 3:on 4:on 5:on 6:off
When I reboot, nothing happens; when I run it via sudo service jenkins-slave start, everything is OK. All properties are contained in another file, and everything works when I do it by hand in an open session. How do I make it execute automatically when CentOS 6 comes up?
my script:
#!/bin/sh
#
# jenkins-slave: Launch a Jenkins BuildSlave instance on this node
#
# chkconfig: - 99 01
# description: Enable this node to fulfill build jobs
#
# Source function library.
. /etc/rc.d/init.d/functions

[ -f /etc/sysconfig/jenkins-slave ] && . /etc/sysconfig/jenkins-slave
[ -n "$JENKINS_URL" ] || exit 0
[ -n "$JENKINS_WORKDIR" ] || exit 0
[ -n "$JENKINS_USER" ] || exit 0
[ -n "$JENKINS_NODENAME" ] || exit 0
[ -x /usr/bin/java ] || exit 0

download_jar()
{
    curl -s -o slave.jar $JENKINS_URL/jnlpJars/slave.jar || exit 0
}

start()
{
    cd $JENKINS_WORKDIR
    [ -f slave.jar ] || download_jar
    echo -n $"Starting Jenkins BuildSlave: "
    su - $JENKINS_USER sh -c "\
        java -jar slave.jar \
        -jnlpUrl $JENKINS_URL/computer/$JENKINS_NODENAME/slave-agent.jnlp \
        >slave.log 2>&1 &"
    echo Done.
}

stop()
{
    echo -n $"Shutting down Jenkins BuildSlave: "
    killproc slave.jar
    echo Done.
}

# See how we were called.
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart|reload)
        stop
        start
        ;;
    status)
        status java
        ;;
    *)
        echo $"Usage: $0 {start|stop|restart|reload}"
        exit 1
esac
exit 0
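One way to start narrowing this down, assuming the standard CentOS 6 layout, is to check whether chkconfig actually created the rc links, whether the script left any trace at boot, and whether the sysconfig file it sources has all the variables set (the script above exits silently if any of them are missing):
chkconfig --list jenkins-slave                         # confirm runlevels 2-5 are on
ls -l /etc/rc3.d/ | grep jenkins-slave                 # the S99 start link chkconfig should have created
grep -i jenkins /var/log/boot.log /var/log/messages    # any trace of the script running at boot
cat /etc/sysconfig/jenkins-slave                       # JENKINS_URL, JENKINS_WORKDIR, JENKINS_USER, JENKINS_NODENAME must all be set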
