How to restart minikube service using systemd

I have created the minikube service below using systemd on my Ubuntu machine.
[Unit]
Description=minikube
After=network-online.target firewalld.service containerd.service docker.service
Wants=network-online.target docker.service
Requires=docker.socket containerd.service docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/home/kalpesh
ExecStart=/usr/local/bin/minikube start
ExecStop=/usr/local/bin/minikube stop
User=kalpesh
Group=kalpesh
[Install]
WantedBy=multi-user.target
I would like to restart this service when minikube stops and shows the status below.
kalpesh@kalpesh:~$ minikube status
minikube
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
The systemd service above still thinks that minikube is running, even though it has stopped internally. I would like to grep for 'Stopped' and restart the service when it shows up.

Does the minikube API work?
If so, you can just use minikube start, which restarts the node.
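If you want systemd itself to react when minikube reports Stopped, one option is a small watchdog run from a timer. The following is only a sketch: the script path, the watchdog unit names, and the 5-minute interval are assumptions, and it assumes your unit above is installed as minikube.service.
#!/bin/bash
# /usr/local/bin/minikube-watchdog.sh (hypothetical path; run as root)
# Ask minikube for its status as the kalpesh user and restart the
# systemd unit if any component reports "Stopped".
if sudo -H -u kalpesh /usr/local/bin/minikube status | grep -q 'Stopped'; then
    systemctl restart minikube.service
fi
# minikube-watchdog.service (hypothetical name)
[Unit]
Description=Restart minikube when it reports Stopped
[Service]
Type=oneshot
ExecStart=/usr/local/bin/minikube-watchdog.sh
# minikube-watchdog.timer (hypothetical name)
[Unit]
Description=Run the minikube watchdog every 5 minutes
[Timer]
OnBootSec=5min
OnUnitActiveSec=5min
[Install]
WantedBy=timers.target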

Related

Set Requisite= in a systemd service, but it does not work

ubuntu:20.04
flask_app.service
[Unit]
Description=python web service(using gunicorn WSGI)
After=syslog.target network.target
Requisite=mysql.service
[Service]
Type=simple
PIDFile=/data/project/xhxf/log/flask_app.pid
WorkingDirectory=/data/project/xhxf/
ExecStart=/usr/bin/gunicorn -b localhost:7071 app:app
[Install]
WantedBy=multi-user.target
(screenshots in the original post show the status of flask_app.service and of mysql.service)
When I start flask_app.service and do not start mysql.service, mysql.service is still inactive, but flask_app.service is active.
If I also add After=mysql.service in the [Unit] section and then start flask_app.service without starting mysql.service, both mysql.service and flask_app.service are inactive. Why? Can Requisite= not be used alone? Does it have to be combined with After= to work?
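This matches the documented behaviour: Requisite= makes a unit's start fail when the listed unit is not already active, but it does not imply any ordering, so the systemd documentation recommends combining it with After=. A sketch of the combined [Unit] section, with the rest of flask_app.service left as above:
[Unit]
Description=python web service (using gunicorn WSGI)
After=syslog.target network.target mysql.service
Requisite=mysql.service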

Custom port in systemd script

I'm trying to set a custom port in my Systemd script, here is my .service file:
[Unit]
Description=discord-stock-ticker
Wants=basic.target
After=basic.target network.target
Before=sshd.service
[Service]
SyslogIdentifier=discord-stock-ticker
StandardOutput=syslog
StandardError=syslog
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/opt/discord-stock-ticker/discord-stock-ticker -port 42069
Restart=always
[Install]
WantedBy=multi-user.target
I've tried a bunch of different options like --PORT=xxx or --server.port=xxx, but it still runs on port 8080.
Did you run systemctl daemon-reload after editing the service file? You need to in order to "commit" the changes, so to speak.
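For reference, a minimal sequence after editing the unit; the unit name below is taken from the SyslogIdentifier= line and may differ on your system:
sudo systemctl daemon-reload
sudo systemctl restart discord-stock-ticker.service
# confirm which command line systemd actually uses now
systemctl cat discord-stock-ticker.service
systemctl status discord-stock-ticker.service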

Run Airflow HA Scheduler as systemd services

I want to run two Airflow schedulers, for which I created a templated systemd service file whose name ends with @.service.
Now if I try to run the service like
sudo systemctl start airflow-scheduler@{1..2}
Only one of the schedulers manages to run, while the other one runs into an error which says
sqlalchemy.exc.DatabaseError: (mysql.connector.errors.DatabaseError) 3572 (HY000): Statement aborted because lock(s) could not be acquired immediately and NOWAIT is set.
My service file looks like this:
[Unit]
Description=Airflow scheduler daemon
After=network.target postgresql.service mysql.service redis.service rabbitmq-server.service
Wants=postgresql.service mysql.service redis.service rabbitmq-server.service
[Service]
EnvironmentFile=/etc/sysconfig/airflow
User=myuser
Group=myuser
Type=simple
ExecStart=/usr/local/bin/airflow scheduler
Restart=always
RestartSec=5s
[Install]
WantedBy=multi-user.target
The problem was with mysql-connector-python. I used mysqldb instead in the Airflow config file and it works fine now.
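In practice that means switching the SQLAlchemy driver in the Airflow connection string. A sketch with placeholder host and credentials (the option lives under [core], or [database] in newer Airflow versions):
# airflow.cfg (placeholder host and credentials)
[core]
# before, using mysql-connector-python:
# sql_alchemy_conn = mysql+mysqlconnector://airflow:airflow@localhost:3306/airflow
# after, using mysqldb (the mysqlclient package):
sql_alchemy_conn = mysql+mysqldb://airflow:airflow@localhost:3306/airflow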

systemd service not being triggered by its timer unit

Here is my service unit definition
[Unit]
Description=My Service
[Service]
ExecStart=/bin/bash -lc /usr/local/bin//myservice
# ExecStop=/bin/kill -15 $MAINPID
EnvironmentFile=/etc/myservice/config
User=myuser
Group=mygroup
and its timer unit file
[Unit]
Description=Timer for myservice
[Timer]
Unit=myservice.service
OnCalendar=*-*-* 10:33:00
[Install]
WantedBy=timers.target
I have tentatively set OnCalendar to *-*-* 10:33:00 (followed by a sudo systemctl daemon-reload), but as I was watching my machine I didn't see the service firing. I had also set it for 5 AM, but this morning I saw no evidence of execution.
When I perform a manual sudo systemctl start myservice it works as expected.
What might be preventing the service from executing according to its timer schedule?
You did not start the timer.
sudo systemctl start myservice.timer
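For example, with the unit names from the question:
sudo systemctl enable --now myservice.timer   # start it now and on every boot
systemctl list-timers myservice.timer         # shows the next and last trigger times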

Systemd Hdfs Service [hadoop] - startup

I have created a service that starts and stops the HDFS that is associated with my Spark cluster.
The service:
[Unit]
Description=Hdfs service
[Service]
Type=simple
WorkingDirectory=/home/hduser
ExecStart=/opt/hadoop-2.6.4/sbin/start-service-hdfs.sh
ExecStop=/opt/hadoop-2.6.4/sbin/stop-service-hdfs.sh
[Install]
WantedBy=multi-user.target
The problem is that when I start the service, it stops just after being started!
I think the problem is the service type; I don't really know which type to choose...
Thank you.
Best regards
There are some issues in your config; that is why it is not working.
I'm running Hadoop 2.7.3 and Hive 2.1.1 on Ubuntu 16.04, under the hadoop user.
HADOOP_HOME is /home/hadoop/envs/dwh/hadoop/
[Unit]
Description=Hadoop DFS namenode and datanode
After=syslog.target network.target remote-fs.target nss-lookup.target network-online.target
Requires=network-online.target
[Service]
User=hadoop
Group=hadoop
Type=forking
ExecStart=/home/hadoop/envs/dwh/hadoop/sbin/start-dfs.sh
ExecStop=/home/hadoop/envs/dwh/hadoop/sbin/stop-dfs.sh
WorkingDirectory=/home/hadoop/envs/dwh
Environment=JAVA_HOME=/usr/lib/jvm/java-8-oracle
Environment=HADOOP_HOME=/home/hadoop/envs/dwh/hadoop
TimeoutStartSec=2min
Restart=on-failure
PIDFile=/tmp/hadoop-hadoop-namenode.pid
[Install]
WantedBy=multi-user.target
Checklist:
user and group are set
service type is forking
PID file is set, and it is the actual PID file that start-dfs.sh creates (see the check below)
environment variables are correct
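A quick way to confirm the PIDFile= assumption, assuming the unit is installed as hadoop-dfs.service (the name is a guess; the PID file path comes from the unit above):
sudo systemctl start hadoop-dfs.service
cat /tmp/hadoop-hadoop-namenode.pid      # PID written by start-dfs.sh
systemctl status hadoop-dfs.service      # "Main PID" should match the file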
An alternative: a stoppable oneshot service that runs HDFS and YARN together
[Unit]
Description=Hadoop DFS namenode and datanode & yarn service
After=syslog.target network-online.target
[Service]
User=hadoop
Group=hadoop
Type=oneshot
ExecStartPre=/home/hadoop/hadoop-2.10.1/sbin/start-dfs.sh
ExecStart=/home/hadoop/hadoop-2.10.1/sbin/start-yarn.sh
ExecStop=/home/hadoop/hadoop-2.10.1/sbin/stop-dfs.sh
ExecStopPost=/home/hadoop/hadoop-2.10.1/sbin/stop-yarn.sh
WorkingDirectory=/home/hadoop
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
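Assuming this alternative is saved as hadoop.service (the name is an assumption), RemainAfterExit=yes lets it behave like a normal start/stop service:
sudo systemctl daemon-reload
sudo systemctl enable --now hadoop.service   # runs start-dfs.sh, then start-yarn.sh
sudo systemctl stop hadoop.service           # runs stop-dfs.sh, then stop-yarn.sh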
