I want to disable a systemd service's auto-restart after it crashes. I did some searching and found that the "Restart" property of .service files has several options, but none of them means "disable". Could anyone help please?
If you don't want the service to restart after a crash, delete the Restart option from the service file. Then run:
systemctl daemon-reload
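If you'd rather not edit the unit file directly, a drop-in override gives the same result (a minimal sketch; Restart=no is systemd's default value, so it explicitly switches auto-restart off):
sudo systemctl edit <service_name>
# in the editor that opens, add:
[Service]
Restart=no
Then reload with systemctl daemon-reload as above.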
An enabled service is a service that starts at boot. To disable start at boot run:
sudo systemctl disable <service_name>
I have a strange situation with a web service hosted on a Debian instance: it sometimes stops and does not restart automatically. However, when SSH-ing into the machine, the service seems to restart on its own.
I originally wanted the service to always be up and to restart itself; could you help me figure out what's wrong? I may have misunderstood how systemctl --user services are meant to run.
The service in question is a Rails application running with passenger standalone, but I believe the problem might just be a misconfiguration in the systemd file.
My systemd file
# .config/systemd/user/my_service.service
[Unit]
Description=passenger with rails server for my_service (production)
After=syslog.target network.target
[Service]
Type=forking
PrivateTmp=yes
WorkingDirectory=/websites/xxx/current
PIDFile=/websites/xxx/shared/tmp/pids/passenger.8080.pid
ExecStart=/home/outscale/.asdf/shims/bundle exec passenger start /websites/xxx/current
ExecStop=/home/outscale/.asdf/shims/bundle exec passenger stop /websites/xxx/current
MemoryAccounting=true
MemoryLimit=3584M
Restart=always
RestartSec=1
TimeoutStopSec=30
KillMode=mixed
StandardInput=null
SyslogIdentifier=%p
# Environment
Environment="RAILS_ENV=production"
Environment="NODE_ENV=production"
[Install]
WantedBy=default.target
I have copied this and installed the service using
systemctl --user daemon-reload
systemctl --user enable my_service
Was I meant to use something else, like systemctl --global enable unit? I want my service to run as the "outscale" user that installs the service (otherwise my version manager asdf does not work as expected).
I found the solution to my problem there. I had misunderstood the behavior of the --user flag (vs. using the User= property in the service file).
I was running Debian 11 and, as stated in the mentioned answer, my --user service would not necessarily shut down right after logging out of SSH, but only at some later point (it is not clear whether that happened when the service first crashed or through some kind of cleanup).
The service would then start up again "magically" when SSH-ing into the instance, as a reaction to the user login starting all of that user's services.
So the fix was to reimplement the services using User= and without the --user flag, making them system-wide services.
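For reference, here is a rough sketch of what the reworked unit could look like; the /etc/systemd/system path, User=outscale and multi-user.target are the changes being illustrated, everything else is carried over from the unit above:
# /etc/systemd/system/my_service.service  (system unit, no longer under ~/.config/systemd/user/)
[Unit]
Description=passenger with rails server for my_service (production)
After=syslog.target network.target
[Service]
Type=forking
User=outscale
WorkingDirectory=/websites/xxx/current
PIDFile=/websites/xxx/shared/tmp/pids/passenger.8080.pid
ExecStart=/home/outscale/.asdf/shims/bundle exec passenger start /websites/xxx/current
ExecStop=/home/outscale/.asdf/shims/bundle exec passenger stop /websites/xxx/current
Restart=always
RestartSec=1
Environment="RAILS_ENV=production"
[Install]
WantedBy=multi-user.target
It is then managed without the --user flag:
sudo systemctl daemon-reload
sudo systemctl enable --now my_service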
Could anybody help me shut down Elasticsearch completely? It starts automatically when the system starts.
Yes, it does, because you are initializing it in the config file, which fires every time the system starts.
To answer your question, I believe this answer should help.
It probably runs as a service. If it is on Linux, remove the service file, usually at /etc/init.d/elasticsearch.
If it is on Windows, there is a service.bat file in the installation's bin folder; you can uninstall it using:
service.bat remove
If you are using Ubuntu 15.04+
systemctl disable elasticsearch
For Ubuntu < 15.04
To permanently prevent a service from starting automatically, you would need to run:
echo manual | sudo tee /etc/init/SERVICE.override
where the manual stanza will stop Upstart from automatically loading the service on the next boot. Any service with a .override file takes precedence over the original service file. You will only be able to start the service manually afterwards. If you do not want this, simply delete the .override file.
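Applied to the Elasticsearch case above, that would presumably be:
echo manual | sudo tee /etc/init/elasticsearch.override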
For more details, you may check this
My Elasticsearch server is already running as a service. I can start and run it like so:
sudo service elasticsearch start
sudo service elasticsearch stop
However, I would like to have it always running. Currently I need to start it manually on every system boot. I have already tried to register it as a daemon with the following commands:
sudo update-rc.d elasticsearch defaults
sudo update-rc.d elasticsearch defaults 95 10
I still need to start the Elasticsearch server manually. What do I need to do to run Elasticsearch as a daemon, or to start it at all on system startup? Since it is my local development environment, I do not strictly need Elasticsearch as a daemon; I just need it to start when my system boots.
Not sure if you've found the answer or not (I'm assuming so), but for anyone who has not, you can use:
sudo systemctl enable elasticsearch.service
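If you also want it running immediately rather than only after the next reboot, enabling and starting can be combined (assuming a systemd-based release):
sudo systemctl enable --now elasticsearch.service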
Is there some way to trigger an event (e.g. running a script to push some logs to S3) when an EC2 instance is stopped/terminated?
I have looked into triggering the script using a service in /usr/lib/systemd/system, but I haven't had any luck with that yet. I have heard that networking capabilities on the instance can be shut down before a service is triggered, and if that is true, it could be why the script is not executing correctly.
So the answer is not really AWS-specific, but it is working for me now (tested on EC2 instance stop and terminate).
I've created a systemd service file:
/usr/lib/systemd/system/my_shutdown.service
[Unit]
Description=my_shutdown Service
Before=shutdown.target reboot.target halt.target
Requires=network-online.target network.target
[Service]
KillMode=none
ExecStart=/bin/true
ExecStop=/path/to/my_script.sh
RemainAfterExit=yes
Type=oneshot
[Install]
WantedBy=multi-user.target
Added this service to multi-user.target:
systemctl enable my_shutdown.service
Alternatively you can manually create the symlink:
ln -s /usr/lib/systemd/system/my_shutdown.service /etc/systemd/system/multi-user.target.wants/my_shutdown.service
Started the service and tested by stopping/terminating the instance.
systemctl start my_shutdown.service
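You can also exercise the stop hook without actually shutting the instance down, since stopping the unit runs ExecStop= just like a shutdown would (a quick sanity check, assuming the script logs or is otherwise observable):
sudo systemctl stop my_shutdown.service
journalctl -u my_shutdown.service   # check that /path/to/my_script.sh ran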
My understanding:
Description: a description of our service.
Before: we want our service to stop before these targets are started.
Requires: our service requires that network capabilities are available. These targets must not be stopped before our service starts/stops.
KillMode: none; do not kill our process.
ExecStart: /bin/true; a command that does nothing but return success. Run when our service is started.
ExecStop: the script to run. Run when our service is being stopped.
RemainAfterExit: consider our service active even when all its processes exited.
Type: oneshot; it is expected that the process has to exit before systemd starts follow-up units.
WantedBy: the target we want to add our service to.
References:
https://www.freedesktop.org/software/systemd/man/systemd.service.html
https://www.freedesktop.org/software/systemd/man/systemd.kill.html#
https://www.freedesktop.org/software/systemd/man/systemd.special.html
https://www.freedesktop.org/software/systemd/man/systemd.target.html
You can trigger actions on specific events, such as pushing logs to S3, with CloudWatch. Learn more here: https://aws.amazon.com/cloudwatch/
I know how to run sentry start.
But when I change sentry.conf.py, how can I make the change take effect?
I ran sentry help and cannot find a sentry stop or restart command.
Is there a way to restart the Sentry server?
I just ran into this problem myself. I was using supervisor to start my sentry server, and for some reason it was not killing sentry when I stopped supervisor. To fix this, I ran sudo netstat -tulpn | grep 9000 to find the process id that was still running. For me, it was gunicorn. Kill that process then start the server again and your new settings should take effect.
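In command form, that troubleshooting step looks roughly like this (the PID and process name will differ on your machine; <PID> is a placeholder):
sudo netstat -tulpn | grep 9000   # find what is still listening on Sentry's port
sudo kill <PID>                   # the process id from the previous output
Then start the server again and the new sentry.conf.py settings should be picked up.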
I'm using systemctl to manage sentry.
Firstly, create an executable file, run_worker:
#!/bin/bash
source ~/.sentry/bin/activate
SENTRY_CONF=~/sentry sentry run worker > /var/log/sentry_worker.log 2>&1
Then, create the service files, like:
[Service]
ExecStart={YourPath}/sentry/run_worker
Restart=always
StartLimitInterval=0
[Install]
WantedBy=default.target
Create sentry_web.service and sentry_cron.service likewise, and use
systemctl --user restart sentry_*
to restart.
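For completeness, sentry_web.service can follow the same shape; run_web here is only an assumed wrapper script analogous to run_worker above (e.g. activating the virtualenv and calling sentry run web):
# ~/.config/systemd/user/sentry_web.service
[Service]
ExecStart={YourPath}/sentry/run_web
Restart=always
StartLimitInterval=0
[Install]
WantedBy=default.target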
If you are running your workers using supervisor, just run the commands to restart all the workers:
supervisorctl
restart all
Or if you want to restart a single worker, enter:
supervisorctl
status
to get the list of the workers and use:
restart worker_name
It will restart the sentry process and enable your new configurations.
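The same can be done non-interactively, which is handy in deploy scripts (assuming supervisorctl can reach your supervisord instance):
supervisorctl restart all
# or for a single worker:
supervisorctl status
supervisorctl restart worker_name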