Execute a bash script after a systemd service starts successfully? - bash

I have /etc/systemd/system/tivalue.service with the following content:
[Unit]
Description=Ti-Value Node
After=network.target
[Service]
Type=simple
PIDFile=/var/run/tivalue.pid
User=root
Group=root
ExecStart=/root/TiValue/tiValue --rpcuser=admin --rpcpassword=123456 --httpdendpoint=127.0.0.1:8080 --daemon
KillSignal=15
Restart=always
[Install]
WantedBy=multi-user.target
and also /etc/systemd/system/tivalue.curl.sh.
So, how can I execute /etc/systemd/system/tivalue.curl.sh after tivalue.service has started successfully?

Use an ExecStartPost= entry pointing at the script. See the ExecStartPre/ExecStartPost documentation.
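A minimal sketch of the [Service] section with that entry added (it assumes the script path above is correct and that the script is executable):
[Service]
Type=simple
PIDFile=/var/run/tivalue.pid
User=root
Group=root
ExecStart=/root/TiValue/tiValue --rpcuser=admin --rpcpassword=123456 --httpdendpoint=127.0.0.1:8080 --daemon
# run the script once the unit has started
ExecStartPost=/bin/bash /etc/systemd/system/tivalue.curl.sh
KillSignal=15
Restart=always
Note that with Type=simple the unit counts as "started" as soon as the main process has been spawned, so the script may run before the RPC endpoint is actually listening; having the script retry its curl call for a few seconds is a common workaround. Remember to run systemctl daemon-reload after editing the unit.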

Related

Systemctl service with variables shows empty values

When the VM is created, a systemd service is installed by script1.sh; later script2.sh runs, but the variables in the service are not updated.
Background
I have a systemd service which is defined when the VM is created (script1.sh).
Later I collect the variable values that I want to pass to this service (script2.sh).
script1.sh
cat <<-EOH > /lib/systemd/system/my_service.service
[Unit]
Description=My Service
StartLimitIntervalSec=600
[Service]
Type=simple
ExecStart=/bin/bash -c '/opt/bin/agent --host=localhost:${PORT} --debug=${_DEBUG}'
Restart=always
[Install]
WantedBy=multi-user.target
EOH
script2.sh
PORT=$(get_random_port)
_DEBUG=$(get_debug_value)
sudo systemctl enable my_service.service
sudo systemctl restart my_service.service
When the VM is created, the service is installed (script1.sh runs); later script2.sh runs, but I don't see PORT or _DEBUG getting updated.
Troubleshooting done:
passed set -a
printed env and I can see PORT & _DEBUG with the correct values
passed Environment= to the systemd service:
[Unit]
Description=My Service
StartLimitIntervalSec=600
[Service]
Type=simple
Environment="PORT=${PORT}"
Environment="_DEBUG=${_DEBUG}"
ExecStart=/bin/bash -c '/opt/bin/agent --host=localhost:${PORT} --debug=${_DEBUG}'
Restart=always
[Install]
WantedBy=multi-user.target
I tried to overwrite the service file and that works, but it does not seem like a solid solution.
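For what it's worth, systemd does not read the calling shell's environment, so Environment="PORT=${PORT}" in the unit file cannot pick up values exported by script2.sh. A common pattern is to have script2.sh write the values to a file that the unit loads via EnvironmentFile=. A minimal sketch, assuming a hypothetical /etc/my_service.env path:
# script2.sh (sketch): persist the values where systemd can read them
PORT=$(get_random_port)
_DEBUG=$(get_debug_value)
printf 'PORT=%s\n_DEBUG=%s\n' "$PORT" "$_DEBUG" | sudo tee /etc/my_service.env
sudo systemctl daemon-reload
sudo systemctl restart my_service.service

# and in the unit's [Service] section:
EnvironmentFile=/etc/my_service.env
ExecStart=/bin/bash -c '/opt/bin/agent --host=localhost:${PORT} --debug=${_DEBUG}'
Note also that the unquoted heredoc in script1.sh expands ${PORT} and ${_DEBUG} at file-creation time, so the generated unit may already contain empty values.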

Custom port in Systemd script

I'm trying to set a custom port in my Systemd script; here is my .service file:
[Unit]
Description=discord-stock-ticker
Wants=basic.target
After=basic.target network.target
Before=sshd.service
[Service]
SyslogIdentifier=discord-stock-ticker
StandardOutput=syslog
StandardError=syslog
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/opt/discord-stock-ticker/discord-stock-ticker -port 42069
Restart=always
[Install]
WantedBy=multi-user.target
I've tried a bunch of different options like --PORT=xxx or --server.port=xxx, but it still runs on port 8080.
Did you run systemctl daemon-reload after editing the service file? You need to in order to "commit" the changes, so to speak.
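For reference, the reload-and-restart sequence looks something like this (the unit name is assumed from the SyslogIdentifier above):
# re-read the edited unit file, then restart the service so the new ExecStart takes effect
sudo systemctl daemon-reload
sudo systemctl restart discord-stock-ticker.service
# check which command line the running process was actually started with
systemctl status discord-stock-ticker.service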

Run Airflow HA Scheduler as systemd services

I want to run 2 Airflow schedulers, for which I created a templated systemd service file whose name ends with @.service.
Now if I try to run the services like
sudo systemctl start airflow-scheduler@{1..2}
Only one of the schedulers manages to run, while the other one runs into an error which says
sqlalchemy.exc.DatabaseError: (mysql.connector.errors.DatabaseError) 3572 (HY000): Statement aborted because lock(s) could not be acquired immediately and NOWAIT is set.
My service file looks like this:
[Unit]
Description=Airflow scheduler daemon
After=network.target postgresql.service mysql.service redis.service rabbitmq-server.service
Wants=postgresql.service mysql.service redis.service rabbitmq-server.service
[Service]
EnvironmentFile=/etc/sysconfig/airflow
User=myuser
Group=myuser
Type=simple
ExecStart=/usr/local/bin/airflow scheduler
Restart=always
RestartSec=5s
[Install]
WantedBy=multi-user.target
The problem was with the mysql-connector-python driver. I used mysqldb instead in the Airflow config file and it works fine now.
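A sketch of what that driver switch can look like in airflow.cfg (the connection details here are placeholders, and the exact section name varies between Airflow versions):
[core]
# before: SQLAlchemy using the mysql-connector-python driver
# sql_alchemy_conn = mysql+mysqlconnector://airflow:password@localhost:3306/airflow
# after: SQLAlchemy using the MySQLdb/mysqlclient driver
sql_alchemy_conn = mysql+mysqldb://airflow:password@localhost:3306/airflow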

Instruct systemd to execute a unit after another unit completes successfully

Friends.
I use cloud-config to install and configure a DCOS cluster.
Normally the "agentinstall.service" service takes 5 minutes to complete.
Is it possible to instruct systemd to execute "agentconfigure.service" ONLY AFTER "agentinstall.service" has completed?
#cloud-config
coreos:
  units:
    - name: "agentinstall.service"
      command: "start"
      content: |
        [Unit]
        Description=agent_setup
        After=network.target
        [Service]
        Type=simple
        User=root
        WorkingDirectory=/tmp
        ExecStartPre=/bin/curl -o /tmp/dcos_install.sh http://bootstapnode-0.dev.myztro.internal:9090/dcos_install.sh
        ExecStartPre=/bin/chmod 755 dcos_install.sh
        ExecStart=/bin/bash dcos_install.sh slave
        [Install]
        WantedBy=multi-user.target
    - name: "agentconfigure.service"
      command: "start"
      content: |
        [Unit]
        Description=agent_config
        After=agentinstall.service
        [Service]
        Type=simple
        User=root
        WorkingDirectory=/opt/mesosphere/etc/
        ExecStartPre=/bin/echo "MESOS_ATTRIBUTES=cluster:uploader" >> /opt/mesosphere/etc/mesos-slave-common
        ExecStartPre=/bin/rm -f /var/lib/mesos/slave/meta/slaves/latest
        ExecStart=/usr/bin/systemctl restart dcos-mesos-slave
        [Install]
        WantedBy=multi-user.target
Thank you.
This is very much possible with systemd using the After=/Before= keywords.
You can use something like the following: in agentconfigure.service, provide this directive:
After=agentinstall.service
Usually, After= only ensures that the dependent service is launched after the given service has been launched (not after it has finished).
Since you mentioned that agentinstall.service takes 5 minutes to complete, you have to add Type=notify to agentinstall.service and call sd_notify() from your application when the processing is done.
Based on that notification, systemd will start the next service, i.e. agentconfigure.service.
Read more about service types in the systemd.service(5) documentation, and about sd_notify() in the sd_notify(3) man page.
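A minimal sketch of that readiness handshake for agentinstall.service, assuming the install script itself sends the notification (for a shell script, the systemd-notify helper can stand in for sd_notify(); NotifyAccess=all may be needed because the helper runs as a separate process):
[Service]
# systemd will not consider this unit started until it receives READY=1,
# so units ordered After= it will wait for the notification
Type=notify
NotifyAccess=all
User=root
WorkingDirectory=/tmp
ExecStart=/bin/bash dcos_install.sh slave

# last line of dcos_install.sh, once the install work has finished:
systemd-notify --ready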

Systemd Hdfs Service [hadoop] - startup

I have created a service that starts and stops the HDFS that is associated with my Spark cluster.
The service:
[Unit]
Description=Hdfs service
[Service]
Type=simple
WorkingDirectory=/home/hduser
ExecStart=/opt/hadoop-2.6.4/sbin/start-service-hdfs.sh
ExecStop=/opt/hadoop-2.6.4/sbin/stop-service-hdfs.sh
[Install]
WantedBy=multi-user.target
The problem is that when I start the service, it stops just after being started! :)
I think the problem is the type of the service; I don't really know what type to choose ...
Thank you.
Best regards
There are some issues in your config; that is why it is not working.
I'm running Hadoop 2.7.3 and Hive 2.1.1 on Ubuntu 16.04 under the hadoop user.
HADOOP_HOME is /home/hadoop/envs/dwh/hadoop/
[Unit]
Description=Hadoop DFS namenode and datanode
After=syslog.target network.target remote-fs.target nss-lookup.target network-online.target
Requires=network-online.target
[Service]
User=hadoop
Group=hadoop
Type=forking
ExecStart=/home/hadoop/envs/dwh/hadoop/sbin/start-dfs.sh
ExecStop=/home/hadoop/envs/dwh/hadoop/sbin/stop-dfs.sh
WorkingDirectory=/home/hadoop/envs/dwh
Environment=JAVA_HOME=/usr/lib/jvm/java-8-oracle
Environment=HADOOP_HOME=/home/hadoop/envs/dwh/hadoop
TimeoutStartSec=2min
Restart=on-failure
PIDFile=/tmp/hadoop-hadoop-namenode.pid
[Install]
WantedBy=multi-user.target
Checklist:
user and group are set
the service type is forking
the PID file is set, and it is the actual PID file that start-dfs.sh creates
the environment variables are correct
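A quick way to sanity-check that setup, assuming the unit above was saved under a hypothetical name such as /etc/systemd/system/hdfs.service:
sudo systemctl daemon-reload
sudo systemctl enable --now hdfs.service
systemctl status hdfs.service
# with Type=forking and PIDFile= set, the MainPID reported by status
# should match the PID that start-dfs.sh wrote for the namenode
cat /tmp/hadoop-hadoop-namenode.pid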
An alternative: a stoppable oneshot service which runs HDFS and YARN together:
[Unit]
Description=Hadoop DFS namenode and datanode & yarn service
After=syslog.target network-online.target
[Service]
User=hadoop
Group=hadoop
Type=oneshot
ExecStartPre=/home/hadoop/hadoop-2.10.1/sbin/start-dfs.sh
ExecStart=/home/hadoop/hadoop-2.10.1/sbin/start-yarn.sh
ExecStop=/home/hadoop/hadoop-2.10.1/sbin/stop-dfs.sh
ExecStopPost=/home/hadoop/hadoop-2.10.1/sbin/stop-yarn.sh
WorkingDirectory=/home/hadoop
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
