Run Airflow HA Scheduler as systemd services

I want to run 2 Airflow schedulers, for which I created a systemd template unit file ending in @.service.
Now if I try to start the services like this:
sudo systemctl start airflow-scheduler@{1..2}
Only one of the schedulers manages to run, while the other fails with:
sqlalchemy.exc.DatabaseError: (mysql.connector.errors.DatabaseError) 3572 (HY000): Statement aborted because lock(s) could not be acquired immediately and NOWAIT is set.
My service file looks like this:
[Unit]
Description=Airflow scheduler daemon
After=network.target postgresql.service mysql.service redis.service rabbitmq-server.service
Wants=postgresql.service mysql.service redis.service rabbitmq-server.service
[Service]
EnvironmentFile=/etc/sysconfig/airflow
User=myuser
Group=myuser
Type=simple
ExecStart=/usr/local/bin/airflow scheduler
Restart=always
RestartSec=5s
[Install]
WantedBy=multi-user.target
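Side note on the mechanics, since the file above is shown as a plain unit: to run numbered instances, the file has to be installed as a systemd template, i.e. with an @ before the suffix:

/etc/systemd/system/airflow-scheduler@.service

sudo systemctl daemon-reload
sudo systemctl start airflow-scheduler@{1..2}

The {1..2} is ordinary bash brace expansion; systemd just sees the two instance names airflow-scheduler@1 and airflow-scheduler@2, both instantiated from the same template file.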

The problem was with the mysql-connector-python driver. I used mysqldb instead in the Airflow config file and it works fine now.
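Concretely, that is the SQLAlchemy connection string in airflow.cfg. A sketch (host, credentials and database name are placeholders; the option lives in [core] in older Airflow versions and [database] in newer ones):

# before: sql_alchemy_conn = mysql+mysqlconnector://user:pass@dbhost:3306/airflow
sql_alchemy_conn = mysql+mysqldb://user:pass@dbhost:3306/airflow

The mysql+mysqldb dialect requires the mysqlclient package to be installed in the same environment as Airflow.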

Related

Set Requisite= in a systemd service, but it does not work

Ubuntu 20.04
flask_app.service
[Unit]
Description=python web service(using gunicorn WSGI)
After=syslog.target network.target
Requisite=mysql.service
[Service]
Type=simple
PIDFile=/data/project/xhxf/log/flask_app.pid
WorkingDirectory=/data/project/xhxf/
ExecStart=/usr/bin/gunicorn -b localhost:7071 app:app
[Install]
WantedBy=multi-user.target
(Screenshots of the systemctl status output for flask_app.service and mysql.service omitted.)
When I start flask_app.service without starting mysql.service, mysql.service stays inactive, but flask_app.service becomes active.
If I also add After=mysql.service to the [Unit] section and then start flask_app.service without starting mysql.service, both mysql.service and flask_app.service are inactive. Why? Can Requisite= not be used alone? Does it have to be combined with After= to work?
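For reference, this matches the documented behaviour: per systemd.unit(5), Requisite= does not imply an ordering dependency, so without After= your flask_app.service may be started before mysql.service's state is even considered. The docs recommend always pairing the two:

# flask_app.service, [Unit] section
Requisite=mysql.service
After=mysql.service

With both lines, starting flask_app.service fails immediately unless mysql.service is already active. Unlike Requires=, Requisite= never pulls mysql.service in and starts it, which is why both units end up inactive in your second experiment.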

Custom port in systemd script

I'm trying to set a custom port in my systemd service; here is my .service file:
[Unit]
Description=discord-stock-ticker
Wants=basic.target
After=basic.target network.target
Before=sshd.service
[Service]
SyslogIdentifier=discord-stock-ticker
StandardOutput=syslog
StandardError=syslog
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/opt/discord-stock-ticker/discord-stock-ticker -port 42069
Restart=always
[Install]
WantedBy=multi-user.target
I've tried a bunch of different options like --PORT=xxx or --server.port=xxx, but it still runs on port 8080.
Did you run systemctl daemon-reload after editing the service file? You need to, in order to "commit" the changes, so to speak.
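That is, after every edit to the unit file (unit name as in the question):

sudo systemctl daemon-reload
sudo systemctl restart discord-stock-ticker.service

Beyond that, whether -port 42069 is the right flag depends entirely on the discord-stock-ticker binary; the unit file just passes the arguments through.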

Execute a bash script after a systemd service starts successfully?

I have /etc/systemd/system/tivalue.service with following content:
[Unit]
Description=Ti-Value Node
After=network.target
[Service]
Type=simple
PIDFile=/var/run/tivalue.pid
User=root
Group=root
ExecStart=/root/TiValue/tiValue --rpcuser=admin --rpcpassword=123456 --httpdendpoint=127.0.0.1:8080 --daemon
KillSignal=15
Restart=always
[Install]
WantedBy=multi-user.target
and also /etc/systemd/system/tivalue.curl.sh
So, how can I execute /etc/systemd/system/tivalue.curl.sh after tivalue.service has started successfully?
Use an ExecStartPost= entry pointing at the script; see the ExecStartPre=/ExecStartPost= documentation in systemd.service(5).
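A minimal sketch, assuming the script has been made executable (chmod +x):

# tivalue.service, [Service] section
ExecStartPost=/etc/systemd/system/tivalue.curl.sh

One caveat: with Type=simple, "successfully started" only means the main process has been forked, so if the script talks to the daemon's RPC endpoint it may need to retry until the port is actually listening.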

Instruct systemd to execute a unit after another unit completes successfully

Friends.
I use cloud-config to install and configure a DC/OS cluster.
Normally the agentinstall.service unit takes 5 minutes to complete.
Is it possible to instruct systemd to execute agentconfigure.service ONLY AFTER agentinstall.service has completed?
#cloud-config
coreos:
  units:
    - name: "agentinstall.service"
      command: "start"
      content: |
        [Unit]
        Description=agent_setup
        After=network.target
        [Service]
        Type=simple
        User=root
        WorkingDirectory=/tmp
        ExecStartPre=/bin/curl -o /tmp/dcos_install.sh http://bootstapnode-0.dev.myztro.internal:9090/dcos_install.sh
        ExecStartPre=/bin/chmod 755 dcos_install.sh
        ExecStart=/bin/bash dcos_install.sh slave
        [Install]
        WantedBy=multi-user.target
    - name: "agentconfigure.service"
      command: "start"
      content: |
        [Unit]
        Description=agent_config
        After=agentinstall.service
        [Service]
        Type=simple
        User=root
        WorkingDirectory=/opt/mesosphere/etc/
        ExecStartPre=/bin/echo "MESOS_ATTRIBUTES=cluster:uploader" >> /opt/mesosphere/etc/mesos-slave-common
        ExecStartPre=/bin/rm -f /var/lib/mesos/slave/meta/slaves/latest
        ExecStart=/usr/bin/systemctl restart dcos-mesos-slave
        [Install]
        WantedBy=multi-user.target
Thank you.
This is very much possible with systemd, using the After=/Before= keywords.
You can use something like the following: in agentconfigure.service, add
After=agentinstall.service
After= ensures that the dependent service is launched after the given service has been launched. Note, though, that for a Type=simple unit "launched" only means the process has been forked, not that it has finished.
Since you mentioned that agentinstall.service takes 5 minutes to complete, you have to add Type=notify to agentinstall.service and call sd_notify() from your application when your processing is done. Based on this, systemd will start the next service, i.e. agentconfigure.service.
Read more about ordering in systemd.unit(5), and about sd_notify() in sd_notify(3).
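A simpler alternative worth sketching (not part of the answer above): since dcos_install.sh is a run-once script rather than a long-lived daemon, Type=oneshot achieves the same thing without touching application code, because systemd does not consider a oneshot unit's start job complete until the process has exited, and After= waits for exactly that:

# agentinstall.service, [Service] section (sketch)
Type=oneshot
# keep the unit "active (exited)" once the script finishes
RemainAfterExit=yes
ExecStart=/bin/bash dcos_install.sh slave

With this, After=agentinstall.service in agentconfigure.service fires only once the five-minute install has actually finished; adding Requires=agentinstall.service there as well ensures the install is pulled in even when only the configure unit is started.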

Systemd HDFS service [hadoop] - startup

I have created a service that starts and stops the HDFS associated with my Spark cluster.
The service:
[Unit]
Description=Hdfs service
[Service]
Type=simple
WorkingDirectory=/home/hduser
ExecStart=/opt/hadoop-2.6.4/sbin/start-service-hdfs.sh
ExecStop=/opt/hadoop-2.6.4/sbin/stop-service-hdfs.sh
[Install]
WantedBy=multi-user.target
The problem is that when I start the service, it stops just after being started!
I think the problem is the service type; I don't really know which Type= to choose...
Thank you.
Best regards
There are some issues in your config; that is why it is not working.
I'm running Hadoop 2.7.3 and Hive 2.1.1 on Ubuntu 16.04, under the hadoop user.
HADOOP_HOME is /home/hadoop/envs/dwh/hadoop/
[Unit]
Description=Hadoop DFS namenode and datanode
After=syslog.target network.target remote-fs.target nss-lookup.target network-online.target
Requires=network-online.target
[Service]
User=hadoop
Group=hadoop
Type=forking
ExecStart=/home/hadoop/envs/dwh/hadoop/sbin/start-dfs.sh
ExecStop=/home/hadoop/envs/dwh/hadoop/sbin/stop-dfs.sh
WorkingDirectory=/home/hadoop/envs/dwh
Environment=JAVA_HOME=/usr/lib/jvm/java-8-oracle
Environment=HADOOP_HOME=/home/hadoop/envs/dwh/hadoop
TimeoutStartSec=2min
Restart=on-failure
PIDFile=/tmp/hadoop-hadoop-namenode.pid
[Install]
WantedBy=multi-user.target
Checklist:
the user and group are set
the service type is forking
the PID file is set, and it is the actual PID file that start-dfs.sh creates (see the quick check below)
the environment variables are correct
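A quick way to run that check after editing the unit (the unit name hdfs.service is hypothetical):

sudo systemctl daemon-reload
sudo systemctl start hdfs.service
# the PID systemd adopted should match the namenode PID file
systemctl show -p MainPID hdfs.service
cat /tmp/hadoop-hadoop-namenode.pid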
An alternative: a stoppable oneshot service that starts HDFS and YARN together:
[Unit]
Description=Hadoop DFS namenode and datanode & yarn service
After=syslog.target network-online.target
[Service]
User=hadoop
Group=hadoop
Type=oneshot
ExecStartPre=/home/hadoop/hadoop-2.10.1/sbin/start-dfs.sh
ExecStart=/home/hadoop/hadoop-2.10.1/sbin/start-yarn.sh
ExecStop=/home/hadoop/hadoop-2.10.1/sbin/stop-yarn.sh
ExecStopPost=/home/hadoop/hadoop-2.10.1/sbin/stop-dfs.sh
WorkingDirectory=/home/hadoop
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
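Usage is then the normal start/stop cycle (unit name hypothetical). RemainAfterExit=yes is what keeps the unit in the "active (exited)" state after the start scripts return, so that a later stop actually runs the ExecStop= lines:

sudo systemctl start hadoop.service
sudo systemctl stop hadoop.service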
