redmine, puma, nginx and crontab - bash

I wanted to make puma start automatically when the system reboots.
This is my crontab (standard user):
# m h dom mon dow command
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/local/bin:/usr/bin
HOME=/home/seven/
@reboot /home/seven/mars/start.sh
The file start.sh:
#!/bin/bash
/usr/bin/screen -dmS mars /home/seven/mars/puma -e production
If I run the file myself:
./start.sh
All works fine: screen starts, puma starts. Brill.
But if I reboot the machine, nothing happens, screen and puma are not loaded.
What am I doing wrong?
Many thanks

One possible way to run puma is with systemd, as proposed by Puma's team at
https://github.com/puma/puma/blob/master/docs/systemd.md
This way also lets you restart the service by simply typing
systemctl restart redmine
and you can get its status with
systemctl status redmine
so the output looks like:
● redmine.service - Puma HTTP Server
Loaded: loaded (/etc/systemd/system/redmine.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2021-01-03 06:25:16 CET; 2 days ago
Main PID: 29598 (ruby)
Tasks: 6 (limit: 4915)
CGroup: /system.slice/redmine.service
└─29598 puma 4.2.1 (tcp://0.0.0.0:3000) [redmine]
Jan 03 06:25:17 srv-redmine puma[29598]: Puma starting in single mode...
Jan 03 06:25:17 srv-redmine puma[29598]: * Version 4.2.1 (ruby 2.6.3-p62), codename: Distant Airhorns
Jan 03 06:25:17 srv-redmine puma[29598]: * Min threads: 0, max threads: 16
Jan 03 06:25:17 srv-redmine puma[29598]: * Environment: development
Jan 03 06:25:19 srv-redmine puma[29598]: * Listening on tcp://0.0.0.0:3000
Jan 03 06:25:19 srv-redmine puma[29598]: Use Ctrl-C to stop
Place the following code into: /etc/systemd/system/redmine.service
[Unit]
Description=Puma HTTP Server
After=network.target
[Service]
Type=simple
# Preferably configure a non-privileged user
User=testuser
# Specify the path to your puma application root
WorkingDirectory=/home/testuser/redmine
# Helpful for debugging socket activation, etc.
# Environment=PUMA_DEBUG=1
# Setting secret_key_base for rails production environment. We can set other Environment variables the same way, for example PRODUCTION_DATABASE_PASSWORD
#Environment=SECRET_KEY_BASE=b7fbccc14d4018631dd739e8777a3bef95ee8b3c9d8d51f14f1e63e613b17b92d2f4e726ccbd0d388555991c9e90d3924b8aa0f89e43eff800774ba29
# The command to start Puma, use 'which puma' to get puma's bin path, specify your config/puma.rb file
ExecStart=/home/testuser/.rvm/wrappers/my_app/puma -C /home/testuser/redmine/config/puma.rb
Restart=always
KillMode=process
#RemainAfterExit=yes
#KillMode=none
[Install]
WantedBy=multi-user.target
Make sure to adjust the user and paths in the above code to fit your system, replacing testuser with your real Redmine user.
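Once the unit file is saved and adjusted, reload systemd and enable the service so it starts at boot (standard systemctl commands):
sudo systemctl daemon-reload
sudo systemctl enable redmine.service
sudo systemctl start redmine.service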
The puma.rb file can look something like this:
port ENV['PORT'] || 3000
stdout_redirect '/home/testuser/redmine/log/puma.stdout.log', '/home/testuser/redmine/log/puma.stderr.log'
#daemonize true
#workers 3
#threads 5,5
on_worker_boot do
  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.establish_connection
  end
end
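Before relying on systemd, it can also help to start Puma once by hand with the same config file the unit uses (same example paths as in the ExecStart line above), just to confirm the config itself works:
cd /home/testuser/redmine
/home/testuser/.rvm/wrappers/my_app/puma -C /home/testuser/redmine/config/puma.rb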
For more info about puma config take a look at:
https://github.com/puma/puma

Thanks to Aleksandar Pavić for pointing out that I can use systemd!!!
I am the type that, if I want something to work one way, ignores everything else because I want things to work the way I intended in the first place. So in short, me being stupid.
Systemd was not that simple in the end, but reading the puma documentation helped a lot.
My redmine.service file:
[Unit]
Description=Puma HTTP Server
After=network.target
# Uncomment for socket activation (see below)
#Requires=puma.socket
[Service]
Type=simple
#WatchdogSec=10
# Preferably configure a non-privileged user
User=seven
# Specify the path to your puma application root
WorkingDirectory=/home/seven/mars/
# The command to start Puma, use 'which puma' to get puma's bin path, specify your config/puma.rb file
ExecStart=/bin/bash -lc '/home/seven/.rvm/gems/ruby-2.7.2/bin/puma -C /home/seven/mars/config/puma.rb'
Restart=always
KillMode=process
[Install]
WantedBy=multi-user.target
and my puma.rb
# config/puma.rb
workers Integer(ENV['PUMA_WORKERS'] || 2)
threads Integer(ENV['MIN_THREADS'] || 0), Integer(ENV['MAX_THREADS'] || 16)
rackup DefaultRackup
port ENV['PORT'] || 9292
environment ENV['RACK_ENV'] || 'production'
lowlevel_error_handler do |e|
  Rollbar.critical(e)
  [500, {}, ["An error has occurred, and engineers have been informed. Please reload the page. If you continue to have problems, contact hq@starfleet-command.co\n"]]
end
After this, systemd started to work, with one exception: I was still getting an error when I wanted to enable the service:
sudo systemctl enable redmine.service
Synchronizing state of redmine.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable redmine
update-rc.d: error: cannot find a LSB script for redmine
I needed to remove the old redmine file in /etc/init.d/ and enable redmine.service again to re-create the start script.
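In other words, something along these lines (the file name is taken from the error above):
sudo rm /etc/init.d/redmine
sudo systemctl enable redmine.service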
Now all works as it should :) Many thanks to all!!!

Your details are not sufficient to answer for certain, but in your case I think that you created the crontab file manually. If so, why don't you just use the following command to add a cron job:
crontab -e
If you want to add it manually, make sure that your file is executable, add it to /etc/cron.d and reload your cron service with sudo service cron reload.
If that doesn't help, check the logs with the following command:
journalctl -xe
This may help you to debug your job when your system reboots.
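For the reboot case specifically, a crontab entry (added via crontab -e) that also captures output could look like this; the log path is only a suggestion to make debugging easier:
@reboot /home/seven/mars/start.sh >> /home/seven/mars/cron-reboot.log 2>&1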

So, this is the start.sh script:
#!/bin/bash
DISPLAY=:0; export DISPLAY
SHELL=/bin/bash
PWD=/home/seven/mars
LOGNAME=seven
XDG_SESSION_TYPE=tty
MOTD_SHOWN=pam
HOME=/home/seven
LANG=C.UTF-8
USER=seven
# main script to start screen and after puma.sh
cd /home/seven/mars/ && /usr/bin/screen -dmS mars /home/seven/puma.sh
What I did now is: I am trying to run a second script from the one that cron is starting, puma.sh.
Content:
#!/bin/bash
DISPLAY=:0; export DISPLAY
SHELL=/bin/bash
PWD=/home/seven/mars
LOGNAME=seven
XDG_SESSION_TYPE=tty
MOTD_SHOWN=pam
HOME=/home/seven
LANG=C.UTF-8
USER=seven
# main script to start puma
cd /home/seven/mars/ && puma -e production -w 2 -t 0:16
When I run the start script manually (./start.sh) in the CLI, all works fine.
When I reboot the machine (Ubuntu 20.04 LTS) the script is triggered and screen starts, but the second script does not start.
Now, I did try to put the puma -e production ... directly after the screen -dmS mars ... (without the second script line), but that ended up with screen not starting after reboot.
Again, start.sh works perfectly when triggered manually from the CLI, just not (fully) when triggered by cron.
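A likely difference is that cron's minimal PATH does not include the RVM gem bin directory, so the bare puma command in puma.sh is never found. A sketch of puma.sh that loads the RVM environment and uses the absolute puma path (the RVM loader path is an assumption based on the .rvm paths used elsewhere in this thread):
#!/bin/bash
# load the RVM environment so gems resolve even under cron (loader path assumed)
source /home/seven/.rvm/scripts/rvm
cd /home/seven/mars/ && /home/seven/.rvm/gems/ruby-2.7.2/bin/puma -e production -w 2 -t 0:16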

Related

Starting an opensplice publisher via systemd does not publish data

I have an opensplice publisher on Ubuntu 20.04 that is started via systemd.
If the publisher starts via systemd then the data is not published, but also no errors are reported or present in the OpenSplice log files.
The publisher works if I run it from a command line or if I stop and restart the service.
The QoS are the same for the publisher and subscriber.
The publisher and subscriber applications are running on different machines.
There are no other participants on the network. All the machines are rebooted and the order of reboot does not change the observed behaviour.
The systemd service is:
[Unit]
Description=Publisher Process
Documentation=
After=network.target
StartLimitIntervalSec=0
[Service]
Type=simple
WorkingDirectory=/opt/publisher/bin
ExecStart=/opt/publisher/bin/publisher.sh
Restart=always
RestartSec=2
[Install]
WantedBy=multi-user.target
The publisher.sh is:
#!/bin/bash
cd /opt/publisher/bin
source bashrc_local
# We just keep running the application (in case of a crash)
while true; do
  ./publisher
  sleep 15
done
I have a workaround that feels a little bit naff.
#!/bin/bash
cd /opt/publisher/bin
source bashrc_local
timeout 30 ./remote_processor
killall remote_processor
# We just keep running the application (in case of a crash)
while true; do
  ./publisher
  sleep 15
done
Any ideas on how I can remove my work around?
Edit 16 Sept 22
The issue appears to be systemd start order and dependencies, as I have run into the same issue with a program publishing data via UDP which is not using DDS.
Changing the dependencies so the services are started just before the user login does not help.
Check your environment variables, as systemd will not run with the same environment as your bash console.
In particular, have you set the OSPL_URI variable to point at the config?
If using the commercial version, OSPL_HOME and ADLINK_LICENSE will also need to be set.
Does the PATH variable include your OSPL shared libraries?
These are all set up by running the $OSPL_HOME/release.com script in your bash session.
I tend to manually add the required ones to the service file, e.g.:
Environment=OSPL_URI=file:///opt/ospl.xml
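Applied to the unit from the question, that could look roughly like the following; OSPL_HOME and the library path are assumptions here, take the real values from your release.com:
[Service]
Type=simple
WorkingDirectory=/opt/publisher/bin
Environment=OSPL_URI=file:///opt/ospl.xml
Environment=OSPL_HOME=/opt/ospl
Environment=LD_LIBRARY_PATH=/opt/ospl/lib
ExecStart=/opt/publisher/bin/publisher.sh
Restart=always
RestartSec=2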

Executing Coral Board Demonstration Model on Boot with Custom Systemd Service

I am trying to configure my coral board to boot to the Coco object detection model using a custom systemd service. I've created the executable file, unit file and then enabled the service. The intent is for the camera feed to display to a monitor, but when I power on the board, the monitor only displays a bluish background (I assume the "home screen" of the board).
The executable file:
edgetpu_detect \
--model mobilenet_ssd...
--labels coco...
The unit file:
[Unit]
Description=systemd service.
After=weston.target
[Service]
PAMName=login
Type=simple
User=mendel
WorkingDirectory=/home/mendel
ExecStart=/bin/bash /usr/bin/test_service.sh
Restart=always
[Install]
WantedBy=multi-user.targer
Status of service after enabling and after powering on:
mendel@jumbo-tang:/etc/system$ sudo systemctl status myservice.service
myservice.service - systemd service.
Loaded: loaded (/etc/systemd/system/system/myservice.service; enabled; vendor preset
Active: active (running) since Mon 2020-01-06 03:32:03 UTC; 1s ago
Main PID: 4847 (bash)
Tasks: 0 (limit: 4915)
CGroup: /system.slice/myservice.service
4847 /bin/bash /usr/bin/test_service.sh
Jan 06 03:32:03 jumbo-tang systemd[1]: myservice.service: Service hold-off time
Jan 06 03:32:03 jumbo-tang systemd[1]: Stopped Example systemd service..
Jan 06 03:32:03 jumbo-tang systemd[1]: Started Example systemd service..
Jan 06 03:32:03 jumbo-tang systemd[4847]: pam_unix(login:session): session opene
The executable is saved to /usr/bin, and was made executable with sudo chmod +x /usr/bin/test_service.sh
The unit file was saved to /etc/systemd/system, and was given permissions with sudo chmod 644 /etc/systemd/system/myservice.service
I'm curious to know whether my executable can simply contain the code I'd normally use to launch a model, like I've done, whether my unit file is properly configured, or what else could be wrong that I'm not thinking of.
Any help is appreciated!
I believe I've answered you via coral-support; we discussed that you're most likely just missing a couple of things:
1) When starting a systemd service, especially at boot, sometimes not all environment variables are loaded. In this case, you may need to add the line:
Environment=DISPLAY=:0
before ExecStart. However, I don't suspect that this is the issue, because the process waits on weston.target, which should already wait on the environment variables.
2) This one is a lot more complicated than the previous one, but you misspelled "target" in "WantedBy=multi-user.targer" (joking, of course).
I'm showing the steps here again as an example for future reference.
1) Create a file called detects.service with the following contents:
[Unit]
Description=systemd auto face detection service
After=weston.target
[Service]
PAMName=login
Type=simple
User=mendel
WorkingDirectory=/home/mendel
Environment=DISPLAY=:0
ExecStart=/bin/bash /usr/bin/detect_service.sh
Restart=always
[Install]
WantedBy=multi-user.target
2) Move the file to /lib/systemd/system/detects.service:
$ sudo mv detects.service /lib/systemd/system/detects.service
3) Create a file called detect_service.sh with the following content:
edgetpu_detect --model fullpath/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite --label fullpath/coco_labels.txt
4) Make it executable and move it to /usr/bin:
$ sudo chmod u+x detect_service.sh
$ sudo mv detect_service.sh /usr/bin
5) Enable the service with systemctl:
$ sudo systemctl enable detects.service
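After enabling, you can start it right away and confirm it is running (standard systemctl usage):
$ sudo systemctl start detects.service
$ sudo systemctl status detects.service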

Converting systemd script to .init script for CentOS 6

I'm not really good with shell scripting; by not really good I mean I don't know it at all.
I need to convert this systemd unit file to a .init script; it's for setting up nginx and uWSGI for serving a web app.
[Unit]
Description=uWSGI instance to serve myproject
After=network.target
[Service]
User=user
Group=nginx
WorkingDirectory=/home/user/myproject
Environment="PATH=/home/user/myproject/myprojectenv/bin"
ExecStart=/home/user/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
[Install]
WantedBy=multi-user.target
CentOS 6 does not support systemd; please help.
On systems that don't support systemd you could use other supervisors; for example, if you need something portable and also compatible with macOS/BSD you could use immortal.
This is a basic run.yml that could start uwsgi:
cmd: /home/user/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
cwd: /home/user/myproject/myprojectenv
log:
  file: /var/log/my-project.log
You could also check the uWSGI examples from the docs, /etc/init/uwsgi.conf for example:
# simple uWSGI script
description "uwsgi tiny instance"
start on runlevel [2345]
stop on runlevel [06]
respawn
exec uwsgi --master --processes 4 --die-on-term --socket :3031 --wsgi-file /var/www/myapp.wsgi
In this case it is using Upstart; check this answer: https://serverfault.com/a/292000/94862
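If you specifically want a classic SysV-style init script on CentOS 6 instead of a supervisor, a minimal, untested sketch could look like this (paths are taken from the unit file above; the script name, pidfile and log locations are assumptions):
#!/bin/bash
# /etc/init.d/myproject (sketch)
# chkconfig: 2345 85 15
# description: uWSGI instance to serve myproject

UWSGI=/home/user/myproject/myprojectenv/bin/uwsgi
APPDIR=/home/user/myproject
PIDFILE=/home/user/myproject/uwsgi.pid   # assumed location, writable by "user"
LOGFILE=/home/user/myproject/uwsgi.log   # assumed location

case "$1" in
  start)
    su - user -c "cd $APPDIR && $UWSGI --ini myproject.ini --daemonize $LOGFILE --pidfile $PIDFILE"
    ;;
  stop)
    su - user -c "$UWSGI --stop $PIDFILE"
    ;;
  restart)
    $0 stop
    sleep 1
    $0 start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac
Register it with chkconfig --add myproject and chkconfig myproject on so it runs at boot.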

start Faye server on system start

I want to run the Faye server on system startup.
I'm trying with this in /etc/init/faye.conf:
description "faye"
author "email#gmail.com"
description "Chat Server for In-house chat rooms"
start on filesystem or runlevel [2345]
stop on run level [!2345]
cd /var/App
exec /usr/local/rvm/gems/ruby-2.0.0-p451/bin/rackup /var/App/faye.ru -s thin -E production
but it is not working.
When I execute the command
sh faye.conf
it works fine.
It even works via the irb console:
`/usr/local/rvm/gems/ruby-2.0.0-p451/bin/rackup /var/App/faye.ru -s thin -E production -D`
Any idea where the problem is and why the init script is not working on its own?
After some investigation I found this error:
/usr/bin/env: ruby_executable_hooks: No such file or directory

Use God with multiple applications and start them automatically after a reboot

I'm currently trying to monitor various processes/daemons of in total three Rails/Rack applications using god. Monitoring works great; the problem is that I'm not able to configure god to autostart all processes after a reboot.
My setup: I'm running a Linux VPS with CentOS & Plesk.
I have a non-root Linux user "deployer" which is used to deploy & run the three Rails/Rack applications. Two applications are running with the Passenger Apache module; the third application uses a thin server (that's necessary because the application doesn't work with Apache). The two Rails applications that are using Passenger have additional rake tasks that run in the background; these and the thin server are monitored by god.
The god gem is specified in the Gemfile of all three applications.
In every deploy.rb file I have a task that looks like:
namespace :misc do
  desc "restart workers using god; restart webserver"
  task :restart, roles: [:web, :resque] do
    run "touch #{current_path}/tmp/restart.txt"
    god.all.start
    god.all.reload
    god.all.terminate
    god.all.start
  end
end
After a reboot of the server, if I run cap misc:restart for all three applications manually, all processes are booted up and monitored correctly.
Every try to start god automatically on boot and start all necessary processes failed so far.
I tried many different things, but nothing worked. My approach so far was to create a cron task with @reboot that runs three variants of the following script:
#!/bin/bash -l
cd /path/to/app/ && bundle exec god -c /path/to/app/config/god/resque.god && bundle exec god load /path/to/app/config/god/resque.god && bundle exec god start resque
This works great for the first application: god and all processes are started.
When the script is executed for the second application (of course with the correct paths), god is not able to start the tasks.
I enabled logging in god, and the error message (in the case of the Rack application) was "thin: command not found".
When I start the Rack application first, thin is started correctly and the commands of the other tasks are not found.
I don't get what's wrong with my configuration. I added the bundle exec command in front of the god calls as you can see above (so the commands should be executed in the environment of their respective application); nevertheless, it just doesn't work.
I would really appreciate if anyone could help me getting god to start automatically.
If you need further information please don't hesitate to ask!
Thanks in Advance!
I am working on something similar and took this approach:
Use Upstart or something similar to launch the god daemon on system boot. For me this is done like so:
/etc/init/god.conf
description "god"
start on runlevel [2]
stop on runlevel [016]
console owner
exec /usr/local/rvm/bin/rvm_god -c /etc/god
respawn
That job runs god, specifying one Ruby god configuration file with the -c option:
/etc/god
# Load the configs
God.load "/home/dangerousbeans/kitten_smusher/config/config.god"
God.load "/home/dangerousbeans/irc_nommer/config/config.god"
This Ruby file loads in the individual application god configs, and running God.load causes them to boot up.
The individual files look like this (the PATH juggling is because I'm using RVM):
/home/dangerousbeans/irc_nommer/config/config.god
God.watch do |w|
  w.dir = "/home/dangerousbeans/irc_nommer"
  w.name = "IRCnommer"
  # scary rvm magic begins
  gemsets_path = [
    "/home/dangerousbeans/.rvm/gems/ruby-1.9.3-p125@irc_nommer/bin",
    "/home/dangerousbeans/.rvm/rubies/ruby-1.9.3-p125/bin",
    "/home/dangerousbeans/.rvm/bin",
    ENV['PATH'] # inherit this
  ].join(':')
  w.env = {
    "PATH" => gemsets_path,
    "GEM_PATH" => "/home/dangerousbeans/.rvm/gems/ruby-1.9.3-p125@irc_nommer"
  }
  # scary rvm magic ends
  w.log = "/tmp/ircnommer.log"
  w.start = "ruby /home/dangerousbeans/irc_nommer/irc_nommer.rb"
  w.keepalive
end
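With that in place, on an Upstart system the god daemon can also be started and checked by hand (standard Upstart commands):
sudo start god
sudo status god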
The key point is that the environment is different between manual and automatic runs when god executes the [start] command.
So you can add the env command to the command, like:
God.watch do |w|
w.start = "cd #{your_app_directory}; env >> log/god.log; your-real-command >> log/god.log 2>&1"
end
There will be some differences compared to what you see when you type env in the same directory yourself.
Check the differences and add the required/correct entries to god's env.
Today I encountered an issue: I deployed 2 Rails apps on 1 server, both using god. App #2 couldn't start its command correctly. After doing the above test I found the cause: god held an environment variable [BUNDLE_GEMFILE] that pointed to App #1. So I added a simple line and the error went away:
God.watch do |w|
  w.env = {
    "BUNDLE_GEMFILE" => "#{$rails_root}/Gemfile"
  }
end

Resources