I have this situation: a PHP script running in an Ubuntu terminal (xfce4-terminal) as a console process (the PHP code contains a loop that does some processing).
The problem is: every two days this process is killed due to memory overuse.
What I need is a bash script that starts the process and, every 48 hours, kills it and starts it again.
The optimal solution is fixing the memory leak: trace the leaking function and post a new question with the relevant code if you need help.
Now for this specific case you can use something like this:
while true
do
timeout 12h php myfile.php
done
This is an infinite loop that starts your command and kills it after 12 hours (or any other duration you want: 30m, 1d, etc.).
A more stable solution is creating a systemd service or deploying your script using some process manager like Supervisor or Monit.
Supervisor has a config parameter "autorestart"; if you set it to true, it restarts your script every time it crashes, which makes for a stable, production-ready solution.
A sample Supervisor config:
[program:are_we_there_yet]
command=php /var/www/areWeThereYet.php
numprocs=1
directory=/tmp
autostart=true
autorestart=true
startsecs=5
startretries=10
redirect_stderr=false
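If you go the systemd route mentioned above, a minimal unit file might look something like this. It is only a sketch: the service name, php path, and script location are assumptions, and RuntimeMaxSec (which forces the restart every 48 hours) needs a reasonably recent systemd.
# /etc/systemd/system/myworker.service (hypothetical name and paths)
[Unit]
Description=Long-running PHP worker

[Service]
ExecStart=/usr/bin/php /path/to/myfile.php
Restart=always
# terminate and restart the service every 48 hours
RuntimeMaxSec=48h

[Install]
WantedBy=multi-user.target
Enable and start it with systemctl enable --now myworker.service.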
Can I stop a specific command running in the Laravel (7) schedule? I need to stop a command execution; I was looking for a solution but I didn't find one...
This worked for me after I found a fault in my scheduled code that would have kept it running for a really long time. SSH into the server, then run:
ps -fe | grep artisan
then run kill PID (PID being the process ID from the previous output). Killing the first two results worked for me.
Note: If you are using withoutOverlapping() the process will not start up again unless you change the ->name() to something unique.
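If you prefer a one-liner, something like the following sketch should work; it assumes every matching process is one you actually want to kill. The [a]rtisan pattern keeps grep from matching itself, awk picks the PID column of ps -fe, and xargs hands the PIDs to kill.
ps -fe | grep '[a]rtisan' | awk '{print $2}' | xargs -r kill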
I'm fairly new to Bash, redis and linux in general and I'm having trouble with creating a script. This is also my first question, I hope it is not a duplicate.
So here's the problem: I'm creating a simple application in Ruby for educational purposes, but the feature I'm trying to implement uses Redis and Sidekiq. What I want to do is create an executable script (I named it server) that starts the Redis server and Sidekiq, but also shuts Redis down after the user finalizes Sidekiq.
This is what I came up with:
#!/usr/bin/env sh
set -e
redis-server --daemonize yes
bundle exec sidekiq -r ./a/sample/path/worker.rb
redis-cli shutdown # this is not working, I want to execute this after shutting sidekiq down...
When the script reaches the fourth line (the sidekiq command), it shows the little Sidekiq "welcome page" and I can't do anything until I shut it down with Control + C. I assumed that after shutting Sidekiq down this way, the script would continue with the next line I wrote, which is the redis-cli shutdown command.
But it does not. When I Control + C the sidekiq, it simply goes back to the command line.
Is there anyone familiar with these concepts who could help me? I wanted a script that would also shut down Redis after I'm done with Sidekiq.
Thanks!
Have you considered using Foreman?
http://blog.daviddollar.org/2011/05/06/introducing-foreman.html
https://github.com/ddollar/foreman
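With Foreman, both processes are declared in a Procfile and started together with foreman start; pressing Control + C then shuts them both down, so you don't need the daemonize/shutdown dance at all. A minimal Procfile sketch, reusing the path from your script:
redis: redis-server
worker: bundle exec sidekiq -r ./a/sample/path/worker.rb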
I am new to using cron jobs on Google Cloud: I was wondering if it is possible to launch a job on an instance and have it run continuously, without interruption, even after I shut down my local machine (laptop). Is it possible to keep a job running without any SSH connection?
Cron jobs are a possibility, but they are not meant for your scenario; they are for running a command at a certain frequency over time.
A Bash builtin that better suits your needs is disown. First, run your process/script in the background (using &, or stopping it with ^Z and then resuming it in the background with bg):
$ long_operation_command &
[1] 1156
Note that at this point the process is still linked to the session; if the session is closed, the process will be killed.
You can check the jobs running in the background (still attached to the session) with:
$ jobs
[1]+ Running long_operation_command
Then you can run disown to detach the process from the session:
$ disown
You can confirm this by logging in again and checking the output of your script or command, or by verifying with top that the process is still running.
Also worth reading: the difference between nohup foo, foo &, and foo & disown.
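For comparison, the nohup variant would be something like this (the log file name is just an example):
$ nohup long_operation_command > output.log 2>&1 &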
P.S.
The direct answer to your question is yes: cron jobs keep running even if you shut down your laptop or close the session.
For those of you who know what you're talking about I apologise for butchering the way that I'm going to phrase this question. I know nothing about bash whatsoever. With that caveat out of the way, let me get out my cleaver...
I am building a Rails app which has what's called a Procfile, which sets up the processes that need to be run in different environments,
e.g.
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
redis: redis-server
worker: bundle exec sidekiq
proxylocal: bin/proxylocal_local
Each of these lines specifies a process to be run. It also expects a PID to be returned after the process spins up. The syntax is
process_name: process_invocation_script
However the last process, proxylocal, only actually starts a process in development. In production it doesn't do anything.
Unfortunately that causes the Procfile to choke, as it needs a process ID returned. So is there some super-simple, zero-overhead process that I can spawn in that case just to keep the Procfile happy?
The sleep command does nothing for a specified period of time, with very low overhead. Give it an argument longer than your code will run.
For example
sleep 2147483647
does nothing for 2^31 - 1 seconds, just over 68 years. I picked that number because any reasonable implementation of sleep should be able to handle it.
In the unlikely event that that doesn't work (say if you're on an old 16-bit system that can't sleep for more than 2^16 - 1 seconds), you can do a sleep in an infinite loop:
sh -c 'while : ; do sleep 30000 ; done'
This assumes that you need the process to run for a very long time; that depends on what your application needs to do with the process ID. If it's required to be unique as long as the application is running, you need something that will continue to run for a long time; if the process terminates, its PID can be re-used by another process.
If that's not a requirement, you can use sleep 0 or true, which will terminate immediately.
If you need to give the application a little time to get the process ID before the process terminates, something like sleep 10 or even sleep 1 might work, though determining just how long it needs to run can be tricky and error-prone.
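If you can use a different Procfile (or entry) in production, the line could then be as simple as this sketch:
proxylocal: sleep 2147483647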
If Heroku isn't doing anything with proxylocal, I'm not sure why you'd even want this in your Procfile. I'm also a bit confused about whether you want to change the Procfile or what bin/proxylocal_local does, and how you would even do that.
That being said, if you are able to do anything you like in production, your script can just call cat: it will get a PID and then sit waiting for input (which never comes).
For truly minimal overhead, you don't want to run any external commands. When the shell starts a command, it first forks itself, then the child shell execs the external command. If the forked child can run a builtin, you can skip the exec.
Start by creating a read-only fifo somewhere.
mkfifo foo
chmod 400 foo
Then, whenever you need a do-nothing process, just fork a shell which tries to read from the fifo. It's read-only, so no one can write to it, so all reads will block.
read < foo &
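Tied back to the Procfile question, a hypothetical entry using this trick could be the following, assuming the fifo foo was created in the app directory beforehand:
proxylocal: bash -c 'read < foo'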
One of the problems I face with supervisord is that when I have a command which in turn spawns another process, supervisord is not able to kill that child process.
For example, I have a Java process which, when run normally, looks like this:
$ zkServer.sh start-foreground
$ ps -eaf | grep zk
user 30404 28280 0 09:21 pts/2 00:00:00 bash zkServer.sh start-foreground
user 30413 30404 76 09:21 pts/2 00:00:10 java -Dzookeeper.something..something
The supervisord config file looks like:
[program:zookeeper]
command=zkServer.sh start-foreground
autorestart=true
stopsignal=KILL
These kinds of processes, which have multiple children, are not handled well by supervisord when it comes to stopping them from supervisorctl. So when I run this under supervisord and try to stop it from supervisorctl, only the top-level process gets killed, but not the actual Java process.
The same problem was encountered by Rick Hanlon II here: https://coderwall.com/p/4tcw7w
Option stopasgroup=true should be set in the program section for supervisord to stop not only the parent process but also the child processes.
The example is given as:
[program:some_django]
command=python manage.py runserver
directory=/dir/to/app
stopasgroup=true
Also, keep in mind that you may have an older supervisord package that does not have the "stopasgroup" functionality.
I tried these Debian packages on Raspberry Pi:
supervisor_3.0a8 does not work.
supervisor_3.0b2-1 works as expected.
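Applied to the zookeeper program from the question, the fix is just the added group options; killasgroup is the SIGKILL counterpart of stopasgroup. A sketch:
[program:zookeeper]
command=zkServer.sh start-foreground
autorestart=true
stopasgroup=true
killasgroup=true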
Doing the following early in the main bash script called by supervisord fixed the problem for me:
trap "kill -- -$$" EXIT
This kills the entire process group when the main script exits, such as when it is killed by supervisord.
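In context, a minimal wrapper might look like the following sketch; it assumes supervisord starts the script as its own process group leader, which it normally does. Point the supervisord command at this wrapper instead of zkServer.sh directly.
#!/bin/bash
# Propagate termination to the whole process group, including the java child.
trap "kill -- -$$" EXIT
zkServer.sh start-foreground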
A feature was recently added to supervisord to send SIGKILL to the whole process group. It's in github but not officially released yet.
If the process ID is available in a file, you can use the pid-proxy program.
Try this supervisor program config:
stopasgroup=true
killasgroup=true
stopsignal=INT
The following article has an in-depth discussion of the problem:
http://veithen.github.io/2014/11/16/sigterm-propagation.html
You can also use priorities in your /conf.d/your-configuration.conf file. For example, if you want to run zookeeper first and then kafka, you can specify two programs with different priorities.
A lower priority means that the program starts first and stops last.
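For example, a sketch (the kafka command line is an assumption):
[program:zookeeper]
command=zkServer.sh start-foreground
priority=1

[program:kafka]
command=kafka-server-start.sh config/server.properties
priority=2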