Converting systemd script to .init script for CentOS 6 - shell

I'm not really good with shell scripting; by "not really good" I mean I don't know it at all.
I need to convert this systemd unit file to an init script; it's for setting up nginx and uWSGI to serve a web app.
[Unit]
Description=uWSGI instance to serve myproject
After=network.target
[Service]
User=user
Group=nginx
WorkingDirectory=/home/user/myproject
Environment="PATH=/home/user/myproject/myprojectenv/bin"
ExecStart=/home/user/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
[Install]
WantedBy=multi-user.target
CentOS 6 does not support systemd. Please help.

On systems that don't support systemd you could use another supervisor. For example, if you need something portable and also compatible with macOS/BSD, you could use immortal.
This is a basic run.yml that could start uwsgi:
cmd: /home/user/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
cwd: /home/user/myproject/myprojectenv
log:
file: /var/log/my-project.log
You could also check the uWSGI examples from the docs, for example /etc/init/uwsgi.conf:
# simple uWSGI script
description "uwsgi tiny instance"
start on runlevel [2345]
stop on runlevel [06]
respawn
exec uwsgi --master --processes 4 --die-on-term --socket :3031 --wsgi-file /var/www/myapp.wsgi
That example uses Upstart; see this answer: https://serverfault.com/a/292000/94862
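Since the question specifically asks for a SysV init script on CentOS 6, here is a minimal sketch of what /etc/init.d/uwsgi could look like, built from the paths in the unit file above. It is untested; the pidfile and log locations are assumptions, and the `daemon`/`killproc` helpers come from the stock CentOS init functions.

```shell
#!/bin/sh
# chkconfig: 2345 85 15
# description: uWSGI instance to serve myproject
# Minimal SysV init sketch based on the unit file above; paths are assumptions.
. /etc/rc.d/init.d/functions

USER=user
CHDIR=/home/user/myproject
UWSGI=/home/user/myproject/myprojectenv/bin/uwsgi
PIDFILE=/var/run/uwsgi-myproject.pid

start() {
    echo -n "Starting uwsgi: "
    # --daemonize detaches and logs; --pidfile lets stop/status find the master
    daemon --user "$USER" "cd $CHDIR && $UWSGI --ini myproject.ini \
        --daemonize /var/log/uwsgi-myproject.log --pidfile $PIDFILE"
    echo
}

stop() {
    echo -n "Stopping uwsgi: "
    killproc -p "$PIDFILE" uwsgi
    echo
}

case "$1" in
    start)   start ;;
    stop)    stop ;;
    restart) stop; start ;;
    status)  status -p "$PIDFILE" uwsgi ;;
    *) echo "Usage: $0 {start|stop|restart|status}"; exit 1 ;;
esac
```

After placing it in /etc/init.d/uwsgi and making it executable, `chkconfig --add uwsgi` would register it for the runlevels in the `# chkconfig:` header.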

Related

redmine, puma, nginx and crontab

I want to make puma start automatically when the system reboots.
This is my crontab line (standard user):
# m h dom mon dow command
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/local/bin:/usr/bin
HOME=/home/seven/
@reboot /home/seven/mars/start.sh
The start.sh file:
#!/bin/bash
/usr/bin/screen -dmS mars /home/seven/mars/puma -e production
If I run the file myself:
./start.sh
All works fine, screen starts, puma starts. Brill
But if I reboot the machine, nothing happens, screen and puma are not loaded.
What am I doing wrong?
Many thanks
One possible way to run puma is with systemd, as proposed by Puma's team at
https://github.com/puma/puma/blob/master/docs/systemd.md
This way also lets you restart the service by simply typing
systemctl restart redmine
and you can get status by
systemctl status redmine
so the output looks like:
● redmine.service - Puma HTTP Server
Loaded: loaded (/etc/systemd/system/redmine.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2021-01-03 06:25:16 CET; 2 days ago
Main PID: 29598 (ruby)
Tasks: 6 (limit: 4915)
CGroup: /system.slice/redmine.service
└─29598 puma 4.2.1 (tcp://0.0.0.0:3000) [redmine]
Jan 03 06:25:17 srv-redmine puma[29598]: Puma starting in single mode...
Jan 03 06:25:17 srv-redmine puma[29598]: * Version 4.2.1 (ruby 2.6.3-p62), codename: Distant Airhorns
Jan 03 06:25:17 srv-redmine puma[29598]: * Min threads: 0, max threads: 16
Jan 03 06:25:17 srv-redmine puma[29598]: * Environment: development
Jan 03 06:25:19 srv-redmine puma[29598]: * Listening on tcp://0.0.0.0:3000
Jan 03 06:25:19 srv-redmine puma[29598]: Use Ctrl-C to stop
Place the following code into: /etc/systemd/system/redmine.service
[Unit]
Description=Puma HTTP Server
After=network.target
[Service]
Type=simple
# Preferably configure a non-privileged user
User=testuser
# Specify the path to your puma application root
WorkingDirectory=/home/testuser/redmine
# Helpful for debugging socket activation, etc.
# Environment=PUMA_DEBUG=1
# Setting secret_key_base for rails production environment. We can set other Environment variables the same way, for example PRODUCTION_DATABASE_PASSWORD
#Environment=SECRET_KEY_BASE=b7fbccc14d4018631dd739e8777a3bef95ee8b3c9d8d51f14f1e63e613b17b92d2f4e726ccbd0d388555991c9e90d3924b8aa0f89e43eff800774ba29
# The command to start Puma, use 'which puma' to get puma's bin path, specify your config/puma.rb file
ExecStart=/home/testuser/.rvm/wrappers/my_app/puma -C /home/testuser/redmine/config/puma.rb
Restart=always
KillMode=process
#RemainAfterExit=yes
#KillMode=none
[Install]
WantedBy=multi-user.target
Make sure to adjust the user and paths in the above code to fit your system, replacing testuser with your real Redmine user.
puma.rb file can look something like this:
port ENV['PORT'] || 3000
stdout_redirect '/home/testuser/redmine/log/puma.stdout.log', '/home/testuser/redmine/log/puma.stderr.log'
#daemonize true
#workers 3
#threads 5,5
on_worker_boot do
ActiveSupport.on_load(:active_record) do
ActiveRecord::Base.establish_connection
end
end
For more info about the puma config take a look at:
https://github.com/puma/puma
Thanks to Aleksandar Pavić for pointing out that I can use systemd!
I am the type that, if I want something to work one way, ignores everything else because I want things to work the way I intended in the first place. So in short, me being stupid.
systemd was not that simple in the end, but reading the Puma documentation helped a lot.
My redmine.service file:
[Unit]
Description=Puma HTTP Server
After=network.target
# Uncomment for socket activation (see below)
#Requires=puma.socket
[Service]
Type=simple
#WatchdogSec=10
# Preferably configure a non-privileged user
User=seven
# Specify the path to your puma application root
WorkingDirectory=/home/seven/mars/
# The command to start Puma, use 'which puma' to get puma's bin path, specify your config/puma.rb file
ExecStart=/bin/bash -lc '/home/seven/.rvm/gems/ruby-2.7.2/bin/puma -C /home/seven/mars/config/puma.rb'
Restart=always
KillMode=process
[Install]
WantedBy=multi-user.target
and my puma.rb
# config/puma.rb
workers Integer(ENV['PUMA_WORKERS'] || 2)
threads Integer(ENV['MIN_THREADS'] || 0), Integer(ENV['MAX_THREADS'] || 16)
rackup DefaultRackup
port ENV['PORT'] || 9292
environment ENV['RACK_ENV'] || 'production'
lowlevel_error_handler do |e|
Rollbar.critical(e)
[500, {}, ["An error has occurred, and engineers have been informed. Please reload the page. If you continue to have problems, contact hq#starfleet-command.co\n"]]
end
After this, systemd started to work, with one exception: I was still getting an error when I wanted to enable the process:
sudo systemctl enable redmine.service
Synchronizing state of redmine.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable redmine
update-rc.d: error: cannot find a LSB script for redmine
I needed to remove the redmine file in /etc/init.d/ and enable redmine.service again to re-create the start script.
Now all works as it should :) Many thanks to all!!!
Your details are not sufficient to answer, but in your case I think you created the crontab file manually. If so, why don't you just use the following command to add a cron job:
crontab -e
If you want to add it manually, make sure your file has execute permissions, add it to /etc/cron.d, and reload your cron service with sudo service cron reload.
But if that doesn't help, try to see the logs with the following command:
journalctl -xe
This may help you to debug your job when your system reboots.
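For reference, a user crontab entry that runs a script once at boot uses the @reboot keyword (a leading # would comment the line out). A minimal sketch, with the log path being an assumption, is:

```shell
# Edit the user crontab with: crontab -e
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
# Run once at boot; redirect output so failures become visible in a log
@reboot /home/seven/mars/start.sh >> /home/seven/mars/cron-boot.log 2>&1
```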
so, this is the start.sh script:
#!/bin/bash
DISPLAY=:0; export DISPLAY
SHELL=/bin/bash
PWD=/home/seven/mars
LOGNAME=seven
XDG_SESSION_TYPE=tty
MOTD_SHOWN=pam
HOME=/home/seven
LANG=C.UTF-8
USER=seven
# main script to start screen and after puma.sh
cd /home/seven/mars/ && /usr/bin/screen -dmS mars /home/seven/puma.sh
What I did not mention is that I am trying to run a second script from the one that cron is starting: puma.sh.
Its content:
#!/bin/bash
DISPLAY=:0; export DISPLAY
SHELL=/bin/bash
PWD=/home/seven/mars
LOGNAME=seven
XDG_SESSION_TYPE=tty
MOTD_SHOWN=pam
HOME=/home/seven
LANG=C.UTF-8
USER=seven
# main script to start puma
cd /home/seven/mars/ && puma -e production -w 2 -t 0:16
When I run the start script manually with ./start.sh in the CLI, all works fine.
When I reboot the machine (Ubuntu 20.04 LTS), the script is triggered and screen starts, but the second script does not start.
Now, I did try to put the puma -e production ... after the screen -dmS mars ... (without the second script line), but that ended up in screen not starting after reboot.
Again, start.sh works perfectly when triggered manually from the CLI, just not (fully) when triggered by cron.

How to properly override generated systemd unit file to start after a ZFS mount has mounted

I'm using Ubuntu 18.04.4 LTS which uses systemd, but the squid package packaged with this version of Ubuntu is configured to start via init.d. It starts and runs via systemctl start squid.service if I start it manually after the system has booted.
However, I'm using a ZFS mount point ("/media") to store the cache data, and during the boot process squid is starting before this mount point is active. Consequently I'm getting the error "Failed to verify one of the swap directories". Full output of systemctl status squid is here
I'd like to tell systemd to wait until after media.mount has completed in the most minimally invasive way possible (e.g. without modifying the /etc/init.d/squid file that is maintained by the package). To that end I created the /etc/systemd/system/squid.service.d/override.conf file like so:
% cat /etc/systemd/system/squid.service.d/override.conf
[Unit]
Wants=network.target network-online.target nss-lookup.target media.mount
After=network.target network-online.target nss-lookup.target media.mount
[Install]
WantedBy=multi-user.target
But squid is still starting too early.
Is what I want to do possible? Or do I have to bite the bullet and define a native /etc/systemd/system/squid.service file and remove the /etc/init.d/squid init script?
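Not from the original thread, and an assumption to verify on your system: systemd has a directive designed for exactly this dependency, RequiresMountsFor=, which implicitly adds both Requires= and After= on the mount units covering the listed paths. A sketch of the drop-in (whether it takes effect on a generator-produced SysV unit after systemctl daemon-reload is something to test):

```ini
# /etc/systemd/system/squid.service.d/override.conf
[Unit]
# Pulls in Requires= and After= on media.mount automatically
RequiresMountsFor=/media
```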

Running Go as a daemon webserver on CentOS 7

I am trying to migrate from PHP to Go and am planning to drop nginx altogether. But I don't know how to run the Go HTTP web server as a daemon in the background, and I also don't know how to automatically start the web server after a reboot, or how to kill the process.
With nginx all I do is
$ systemctl start nginx.service
$ systemctl restart nginx.service
$ systemctl stop nginx.service
$ systemctl enable nginx.service
$ systemctl disable nginx.service
This is very convenient, but it seems like I can't do this with a Go HTTP server. I have to compile and run it like any other Go program. What solutions exist for these concerns?
This is less of a Go question and more of a systems administration question. There are ways to add a command to systemd (like in this blog post).
Personally, I prefer to keep my applications separate from my services, so I tend to use supervisord for programs that are started, stopped, or restarted frequently. The documentation for supervisord is pretty straightforward, but essentially you create a config file that describes the services you want to run: the command used to run each one (such as /path/to/go/binary -flag), how you want to handle starting, stopping, failure recovery, logging, etc.
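As a rough sketch of such a config file (the program name, paths, and user below are placeholders, not from the original post), a supervisord entry for a Go binary could look like:

```ini
; /etc/supervisord.d/mygoapp.ini -- hypothetical names and paths
[program:mygoapp]
command=/path/to/go/binary -flag        ; the compiled Go web server
directory=/path/to                      ; working directory
user=webapp                             ; run as a non-privileged user
autostart=true                          ; start when supervisord starts
autorestart=true                        ; restart on crash
stdout_logfile=/var/log/mygoapp.log
redirect_stderr=true
```

With that in place, `supervisorctl start mygoapp`, `stop`, and `restart` give roughly the same convenience as the systemctl commands above.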

Systemd Service not starting up my application

I am new to systemd service scripts. I am trying to start my application from a systemd service. My application is a process that in turn invokes multiple processes, including a Qt GUI as one of its children. But the service doesn't start up my application.
This is how my service looks like:
[Unit]
Description=/etc/rc.d/rc.local Compatibility
ConditionFileIsExecutable=/etc/rc.d/rc.local
After=network.target
[Service]
Type=forking
ExecStart=/etc/rc.d/rc.local start
SysVStartPriority=99
rc.local script looks like:
#!/bin/bash
export DISPLAY=:0
sleep 5
cd /var/MINC3/apps
./PMonTsk
So when I try to run the command "systemctl start rc-local.service", it executes the script but doesn't invoke my application. If I replace my application in rc.local with some other Qt GUI sample application, it works fine. Please help me sort out this issue.
If you add
[Install]
WantedBy=multi-user.target
I think it will work ;)
I found a solution to the above problem. I modified my service in the following way, and it works fine after the modification.
[Unit]
Description=/etc/rc.d/rc.local Compatibility
ConditionFileIsExecutable=/etc/rc.d/rc.local
After=network.target
[Service]
Type=forking
ExecStart=/etc/rc.d/rc.local start
ControlGroup=cpu:/
SysVStartPriority=99

Unable to run Tomcat 7 as service inside container on CoreOS

I am trying to set up Tomcat 7 on a Digital Ocean CoreOS machine but am facing some problems and am not sure how to solve them. I am following the tutorial below, provided by Digital Ocean, to set up Apache.
https://www.digitalocean.com/community/tutorials/how-to-create-and-run-a-service-on-a-coreos-cluster
I created docker container and run it using following command.
docker run -i -t ubuntu:14.04 /bin/bash
I was successfully able to install Tomcat 7 using the commands below. (I followed this tutorial to set up Tomcat 7 within the docker container: https://www.digitalocean.com/community/tutorials/how-to-install-apache-tomcat-7-on-ubuntu-14-04-via-apt-get)
sudo apt-get update
sudo apt-get install tomcat7
Then I created a service unit file named tomcat#.service:
[Unit]
Description=Tomcat 7 web server service
After=etcd.service
After=docker.service
Requires=tomcat-discovery#%i.service
[Service]
TimeoutStartSec=0
KillMode=none
EnvironmentFile=/etc/environment
ExecStartPre=-/usr/bin/docker kill tomcat%i
ExecStartPre=-/usr/bin/docker rm tomcat%i
ExecStartPre=/usr/bin/docker pull attacomsian/tomcat
ExecStart=/usr/bin/docker run --name tomcat%i -p ${COREOS_PUBLIC_IPV4}:%i:8080 attacomsian/tomcat `service tomcat7 start` -D FOREGROUND
ExecStop=/usr/bin/docker stop tomcat%i
[X-Fleet]
X-Conflicts=tomcat#*.service
Then I created tomcat-discovery#.service to register service states with Etcd as below
[Unit]
Description=Announce Tomcat#%i service
BindsTo=tomcat#%i.service
[Service]
EnvironmentFile=/etc/environment
ExecStart=/bin/sh -c "while true; do etcdctl set /announce/services/tomcat%i ${COREOS_PUBLIC_IPV4}:%i --ttl 60; sleep 45; done"
ExecStop=/usr/bin/etcdctl rm /announce/services/tomcat%i
[X-Fleet]
X-ConditionMachineOf=tomcat#%i.service
I submitted and loaded files to Fleet as below
fleetctl submit tomcat#.service tomcat-discovery#.service
fleetctl load tomcat#8080.service
fleetctl load tomcat-discovery#8080.service
Everything worked fine so far. I did not see any error. But when I tried to run the service as below
fleetctl start tomcat#8080.service
But it did not start. I can see it appearing as dead.
I am new to CoreOS and still learning. I manage servers at Digital Ocean and know that side quite well. I googled this issue but did not find any help. I personally think the following line is actually causing the trouble:
ExecStart=/usr/bin/docker run --name tomcat%i -p ${COREOS_PUBLIC_IPV4}:%i:8080 attacomsian/tomcat `service tomcat7 start` -D FOREGROUND
I would really appreciate any kind of help to get this up.
Many Thanks
Attacomsian
I was going to suggest you take a look at what others have done, and then discovered you have posted a similar question on the Docker Hub registry.
Did you take a look at the Docker file used by the tutum/tomcat image?
https://github.com/tutumcloud/tutum-docker-tomcat/blob/master/7.0/Dockerfile
https://github.com/tutumcloud/tutum-docker-tomcat/blob/master/7.0/run.sh
It runs a script called "run.sh" that runs tomcat in the foreground.
The thing that is tricky to understand is that Docker is not a virtual machine and therefore does not have any services running. You must run the docker processes explicitly or set up a process manager like runit or supervisord.
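To illustrate the foreground-process point (a sketch, not a verified fix; the catalina.sh path inside the attacomsian/tomcat image is an assumption), the unit's ExecStart would normally run one long-lived foreground command rather than `service tomcat7 start`:

```ini
# tomcat#.service -- run the container's Tomcat in the foreground so that
# Docker (and fleet) track the actual server process
ExecStart=/usr/bin/docker run --name tomcat%i \
    -p ${COREOS_PUBLIC_IPV4}:%i:8080 \
    attacomsian/tomcat /usr/share/tomcat7/bin/catalina.sh run
```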
Hope this helps.
