How to check the status of a service managed by systemd under another user? - systemd

For example, I have two users: user1 and user2. Under user1 we run a service1 service managed by systemd. Under user2 we run a service2 service managed by systemd.
Currently we only allow user1 to log in to the machine. How do I run systemctl status or systemctl is-active to tell whether service2 is running or not?

Use:
sudo systemctl status service2
If necessary, add the appropriate entry to /etc/sudoers so that user1 may only use sudo to call this specific command.
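A minimal sketch of such an entry, assuming systemctl lives at /usr/bin/systemctl (check with which systemctl) and that you add it through visudo:
# /etc/sudoers.d/user1-service2 (hypothetical drop-in file; edit with visudo -f)
# Allow user1 to query service2's state via sudo, and nothing else
user1 ALL=(root) NOPASSWD: /usr/bin/systemctl status service2, /usr/bin/systemctl is-active service2
With this in place, user1 can run sudo systemctl is-active service2 but cannot start or stop the service.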

Related

Auto boot MailHog on Ubuntu 20.04

I installed MailHog in my staging environment by following these steps:
sudo apt-get -y install golang-go
go get github.com/mailhog/MailHog
In order to manually start the service I do:
cd ~/go/bin
./MailHog
Since I'm using Laravel I already have supervisor running for workers.
I'm wondering if there is a way to add a new .conf file in order to start MailHog.
I tried to follow how Laravel workers are started, but so far no luck:
[program:mailhog]
process_name=%(program_name)s_%(process_num)02d
command=~/go/bin/MailHog
user=ubuntu
stdout_logfile=/var/www/api/storage/logs/mailhog.log
I get mailhog:mailhog_00: ERROR (no such file) when I try to start supervisor.
I need a way to auto-boot MailHog, whether through supervisor or through a service.
I'd really appreciate it if you could provide the "recipe" for starting MailHog from supervisor or by using a service.
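For what it's worth, supervisor does not expand ~ in the command path, which is the likely cause of the "no such file" error; a sketch of the same program block with an absolute path (assuming the binary sits in the ubuntu user's default Go bin directory) would be:
[program:mailhog]
process_name=%(program_name)s_%(process_num)02d
; absolute path instead of ~, which supervisor does not expand
command=/home/ubuntu/go/bin/MailHog
user=ubuntu
stdout_logfile=/var/www/api/storage/logs/mailhog.log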
I figured out what the complete installation/setup should be:
Downloading & installation
sudo apt-get -y install golang-go
go get github.com/mailhog/MailHog
Copy MailHog to the bin directory
sudo cp ~/go/bin/MailHog /usr/local/bin/MailHog
Create MailHog service
sudo tee /etc/systemd/system/mailhog.service <<EOL
[Unit]
Description=MailHog
After=network.target
[Service]
User=ubuntu
ExecStart=/usr/bin/env /usr/local/bin/MailHog
[Install]
WantedBy=multi-user.target
EOL
Note: Change the User=ubuntu to your username.
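Depending on the systemd version, you may need to make systemd re-scan its unit files before the new unit shows up:
sudo systemctl daemon-reload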
Check that the service is loaded successfully:
sudo systemctl status mailhog
Output
mailhog.service - MailHog
Loaded: loaded (/etc/systemd/system/mailhog.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Enable the service
sudo systemctl enable mailhog
Reboot the system and visit http://yourdomain.com:8025/
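If you prefer not to reboot, systemctl can enable and start the unit in one step with the --now flag:
sudo systemctl enable --now mailhog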
You don't need supervisor; you can use systemd to create a startup service.
systemd is the standard system and service manager in modern Linux. It is responsible for starting and managing programs during Linux startup.
Before that, add MailHog's directory to your system PATH variable so you can call it by name only:
export PATH=$PATH:/home/YOUR-USERNAME/go/bin
sudo systemctl enable mailhog
Or, if you are using a desktop environment, you can follow this:
https://askubuntu.com/questions/48321/how-do-i-start-applications-automatically-on-login

Ansible having problem authenticating with Google Cloud Platform

We are using Ansible to deploy an image to Google Kubernetes Cluster (GKE).
We have set up Ubuntu 20.04 and Python 3.8.5.
playbook.main.yml:
---
- hosts: localhost
  vars:
    k8s_file_path: /home/pesinn/Documents/...
  become: yes
  become_method: sudo
  roles:
    - k8s
main.yml:
- name: First Deployment
  k8s:
    kubeconfig: /home/pesinn/.kube/config
    src: "{{k8s_file_path}}/deployment.yml"
When trying to deploy the image defined in deployment.yml file, by running the playbook, we get this error:
kubernetes.config.config_exception.ConfigException: cmd-path: process returned 1
Cmd: /home/pesinn/y/google-cloud-sdk/bin/gcloud config config-helper --format=json
Stderr: WARNING: Could not open the configuration file: [/root/.config/gcloud/configurations/config_default].
ERROR: (gcloud.config.config-helper) You do not currently have an active account selected.
Please run:
$ gcloud auth login
What we've already done
Initialized the cloud: gcloud init
Logged in and chose a project: gcloud auth login
Ran export GOOGLE_APPLICATION_CREDENTIALS="path_to_service_account_key.json"
Ran gcloud container clusters get-credentials {gke_project} --region {region}
Ran the playbook: sudo ansible-playbook playbook.main.yml -vvv
Ran gcloud config config-helper --format=json on the local machine without any problems
What is very strange here is that we're logged in for sure. We can access the GKE cluster through kubectl command on the local machine. However, Ansible complains about us not being logged in. Also, in the error logs, we see that it is trying to open /root/.config/gcloud/configurations/config_default. Our default config file is, on the other hand, located in the home folder.
This error occurs randomly. Sometimes Ansible can detect our login and deploys the image, but sometimes it gives us this error. Both scenarios can happen without any code changes being made.
For some reason, Ansible does not use GCP's default environment variables for authentication.
You can set:
GCP_AUTH_KIND
GCP_SERVICE_ACCOUNT_EMAIL
GCP_SERVICE_ACCOUNT_FILE
GCP_SCOPES
GCP_SERVICE_ACCOUNT_FILE is the equivalent of GOOGLE_APPLICATION_CREDENTIALS
Reference: https://docs.ansible.com/ansible/latest/scenario_guides/guide_gce.html#providing-credentials-as-environment-variables
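A minimal sketch of setting these before running the playbook, assuming a service-account key at a hypothetical path (the -E flag keeps the variables visible when the playbook runs through sudo):
export GCP_AUTH_KIND=serviceaccount
export GCP_SERVICE_ACCOUNT_FILE=/home/pesinn/keys/service_account_key.json   # hypothetical path
export GCP_SCOPES=https://www.googleapis.com/auth/cloud-platform
sudo -E ansible-playbook playbook.main.yml -vvv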

Using ping on localhost in a playbook

I am unable to run ping commands from an Ansible host (using localhost, see below).
I built a simple playbook to run ping using the command module:
---
#
- name: GET INFO
  hosts: localhost
  tasks:
    - name: return motd to registered var
      command: "/usr/bin/ping 10.39.120.129"
      register: mymotd
    - name: debug output
      debug: var=mymotd
However, I get this error: "ping: socket: Operation not permitted"
Seems like there is a permissions issue. However, looking at the /usr/bin directory, it looks like ping would be executable to me:
"-rwxr-xr-x. 1 root root 66176 Aug 4 2017 ping",
I cannot become or use sudo; it seems like Tower is locked down for that, and I don't have the authority to change it either.
Anyone have any suggestions? What brought me to this, is that I am trying to run ping in a custom module and getting a similar issue.
Thanks
The ping binary needs to have the SETUID bit set to be fully runnable by a normal user, which is not the case on your server.
You need to run as root:
chmod u+s $(which ping)
If you don't have root access and cannot have this done by an admin, I'm afraid you're stuck... unless the server you are trying to ping is a machine you can manage with Ansible.
In this latter case, there is a ping module you can use. It is not an ICMP ping, as the docs say. See if this can be used in your situation.
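A minimal sketch of that approach, assuming the target machine is already in your inventory under a hypothetical group name:
---
- name: check reachability with Ansible's ping module (not ICMP)
  hosts: remote_servers   # hypothetical inventory group
  gather_facts: no
  tasks:
    - name: verify Ansible can connect and log in
      ping: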
One of the numerous references I could find about ping permissions: https://ubuntuforums.org/showthread.php?t=927709

Ansible module: should I configure using systemd or service?

I want to configure the following Linux command using Ansible:
sudo systemctl enable XXX.service
Should I use:
systemd:
  name: XXX.service
  enabled: yes
or
service:
  name: XXX.service
  enabled: yes
And what is the difference between using systemctl, systemd, and service?
Referring to the Ansible systemd module and Ansible service module docs, I think you should use the systemd module.
It's designed to control systemd. systemd is designed to replace the older service tooling, so although you see xxx.service names in systemctl commands, systemd is a different system from service.
Use "service"; it will call systemd's systemctl command anyway.
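For completeness, a sketch of a task that both enables the unit at boot and makes sure it is running (the equivalent of systemctl enable --now XXX.service):
- name: enable and start XXX.service
  systemd:
    name: XXX.service
    enabled: yes
    state: started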

How to create new system service by ansible-playbook

I have created a script to start/stop my application. Now I want to add it as a CentOS system service. First I created a task to create a link from my script to /etc/init.d/service_name, as below.
---
- name: create startup link
  file: src={{ cooltoo_service_script }} dest={{ cooltoo_service_init }} state=link
After creating the link, I want to register it as a system service. The command used to do that is "chkconfig --add service_name". I wonder whether there is an Ansible module to do that instead of hardcoding the command in the playbook file. I have looked at this page http://docs.ansible.com/ansible/service_module.html but it only shows how to manage a service, not create a new one.
The code snippet below will create a service in CentOS 7.
Code
Tasks
/tasks/main.yml
- name: TeamCity | Create environment file
  template: src=teamcity.env.j2 dest=/etc/sysconfig/teamcity
- name: TeamCity | Create Unit file
  template: src=teamcity.service.j2 dest=/lib/systemd/system/teamcity.service mode=644
  notify:
    - reload systemctl
- name: TeamCity | Start teamcity
  service: name=teamcity.service state=started enabled=yes
Templates
/templates/teamcity.service.j2
[Unit]
Description=JetBrains TeamCity
Requires=network.target
After=syslog.target network.target
[Service]
Type=forking
EnvironmentFile=/etc/sysconfig/teamcity
ExecStart={{teamcity.installation_path}}/bin/teamcity-server.sh start
ExecStop={{teamcity.installation_path}}/bin/teamcity-server.sh stop
User=teamcity
PIDFile={{teamcity.installation_path}}/teamcity.pid
Environment="TEAMCITY_PID_FILE_PATH={{teamcity.installation_path}}/teamcity.pid"
[Install]
WantedBy=multi-user.target
/templates/teamcity.env.j2
TEAMCITY_DATA_PATH="{{ teamcity.data_path }}"
Handlers
/handlers/main.yml
- name: reload systemctl
  command: systemctl daemon-reload
References:
Ansible playbook structure: http://docs.ansible.com/ansible/playbooks_intro.html
SystemCtl: https://www.digitalocean.com/community/tutorials/how-to-use-systemctl-to-manage-systemd-services-and-units
The 'service' module supports an 'enabled' argument.
Here's an example part of a playbook, which I will freely admit does look like a newbie attempt. This assumes RHEL/CentOS 6.x, which uses SysV, not systemd.
- name: install rhel sysv supervisord init script
  copy: src=etc/rc.d/init.d/supervisord dest=/etc/rc.d/init.d/supervisord owner=root group=root mode=0755
- name: install rhel sysv supervisord sysconfig
  copy: src=etc/sysconfig/supervisord dest=/etc/sysconfig/supervisord owner=root group=root mode=0640
- name: enable sysv supervisord service
  service: name=supervisord enabled=yes
- name: start supervisord
  service: name=supervisord state=started
IMPORTANT: A lot of custom init scripts WILL FAIL with Ansible and SysV init; the reason being that the 'status' option (service supervisord status) needs to return an LSB-compliant return code. Otherwise, Ansible will not know if a service is up or down, and idempotency will fail (restart will still work because that is unconditional).
Here's part of a script, which I've just rewritten to make use of the 'status' function within /etc/init.d/functions (you'll notice this same pattern in other Red Hat provided init scripts in /etc/init.d/):
status)
        /usr/bin/supervisorctl $OPTIONS status
        status -p $PIDFILE supervisord
        # The 'status' option should return one of the LSB-defined return-codes,
        # in particular, return-code 3 should mean that the service is not
        # currently running. This is particularly important for Ansible's 'service'
        # module, as without this behaviour it won't know if a service is up or down.
        RETVAL=$?
        ;;
Reference: http://refspecs.linuxfoundation.org/LSB_5.0.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html
If the status action is requested, the init script will return the
following exit status codes.
0       program is running or service is OK
1       program is dead and /var/run pid file exists
2       program is dead and /var/lock lock file exists
3       program is not running
4       program or service status is unknown
5-99    reserved for future LSB use
100-149 reserved for distribution use
150-199 reserved for application use
200-254 reserved
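You can verify your init script's behaviour by hand before pointing Ansible at it, for example:
service supervisord status; echo $?   # expect 0 while the service is running, 3 after stopping it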
Indeed the service module only manages already registered services as you have figured out. To my knowledge there is no module to register a service.
Are you aware this step can be skipped with some modifications to your init.d script? If the script follows those rules you can just use the service module to enable/start the service.
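If you do end up hardcoding the chkconfig call, as mentioned in the question, a sketch of that task (the variable name is assumed, in the style of the question's other variables) could be:
- name: register the init script with chkconfig (SysV)
  command: chkconfig --add {{ cooltoo_service_name }}   # hypothetical variable holding the service name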
For RedHat/CentOS 7 (using systemd/systemctl), the equivalent of chkconfig --add ${SERVICE_NAME} is systemctl daemon-reload [via fedoraproject.org].
Then, using the systemd module of Ansible 2.2 or greater, you may start a service with a preceding systemctl daemon-reload like so [via docs.ansible.com]:
# Example action to restart service cron on centos, in all cases, also issue daemon-reload to pick up config changes
- systemd:
    state: restarted
    daemon_reload: yes
    name: crond
Based on my experience, the daemon_reload parameter can also be used with the generic service module, although it isn't documented and might fail on non-systemd systems:
- service:
    state: restarted
    daemon_reload: yes
    name: crond
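Tying this back to registering a brand-new unit (such as the teamcity.service from the earlier answer), a sketch that reloads the daemon, enables the unit at boot, and starts it in one task would be:
- name: pick up the new unit file, enable it at boot and start it
  systemd:
    name: teamcity.service
    daemon_reload: yes
    enabled: yes
    state: started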
