I want to configure the following linux command using Ansible:
sudo systemctl enable XXX.service
should I use:
systemd:
  name: XXX.service
  enabled: yes
or
service:
  name: XXX.service
  enabled: yes
And what is the difference between using systemctl, systemd, and service?
Referring to the ansible systemd module and ansible service module docs, I think you should use the systemd module: it is designed specifically to control systemd units. systemd is designed to replace the older SysV service tooling, so while a unit shows up as xxx.service in systemctl output, it is managed by systemd, which is not the same thing as service.
That said, if you use "service" on a systemd host, it will delegate to systemd's systemctl command anyway.
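For completeness, a minimal task equivalent to the sudo systemctl enable XXX.service command in the question might look like this (module arguments taken from the question; become: yes supplies the sudo):

```yaml
- name: enable XXX.service at boot
  become: yes
  systemd:
    name: XXX.service
    enabled: yes
```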
Related
I want to know how to check whether I have enabled exposing the daemon on TCP, and if I haven't, how to enable it.
On Linux, you need to configure the file: /etc/docker/daemon.json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "log-driver": "journald",
  "signature-verification": false
}
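A syntax error in daemon.json (such as a trailing comma) will prevent the Docker daemon from starting at all, so it is worth validating the file before restarting. A minimal sketch using only Python's stdlib JSON tool (the /tmp path is just for illustration; the real file lives in /etc/docker):

```shell
# Write a sample daemon.json to an illustrative /tmp path
cat > /tmp/daemon.json <<'EOF'
{
    "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
    "log-driver": "journald"
}
EOF
# json.tool exits non-zero on a syntax error, so this catches typos early
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json: valid JSON"
```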
On Mac, the path will be something similar.
I am using TLS, but if you just want to test it or give it a try, then that is the place to do it.
If it doesn't work without TLS, there is no harm in generating a self-signed certificate and using it.
Update:
Docker for Mac:
$ socat -d TCP-LISTEN:2376,range=127.0.0.1/32,reuseaddr,fork UNIX:/var/run/docker.sock
$ curl localhost:2376/version
{"Version":"1.11.2","ApiVersion":"1.23","GitCommit":"56888bf","GoVersion":"go1.5.4","Os":"linux","Arch":"amd64","KernelVersion":"4.4.12-moby","BuildTime":"2016-06-06T23:57:32.306881674+00:00"}
More Details:
details
On a Linux system, create a daemon.json file in /etc/docker:
{"hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]}
Add /etc/systemd/system/docker.service.d/override.conf:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
Reload the system daemon:
systemctl daemon-reload
Restart docker:
systemctl restart docker.service
Reference: https://gist.github.com/styblope/dc55e0ad2a9848f2cc3307d4819d819f
I want to install a systemd service from a Jinja2 template. How do I do this?
Do I have to use copy module to copy the file to /lib/systemd/system and then use systemd module to enable it?
Is there a better way?
I use the template module to install the .service file into /etc/systemd/system. According to this DigitalOcean blog post, /lib/systemd/system should be reserved for packages bundled with the OS itself, and third-party services should be defined in /etc/systemd/system.
With ansible's systemd module I'd start the service with daemon_reload=yes.
Prior to Ansible 2.2: I do a systemctl daemon-reload afterward (can use an ansible handler for this if appropriate) to prod systemd to pick up the new file.
- name: install myservice systemd unit file
  template: src=myservice.j2 dest=/etc/systemd/system/myservice.service

- name: start myservice
  systemd: state=started name=myservice daemon_reload=yes

# For ansible < 2.2 only
#- name: reload systemd unit configuration
#  command: systemctl daemon-reload
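If you'd rather go the handler route mentioned above, a minimal sketch could look like this (the task and handler names are my own):

```yaml
# tasks file
- name: install myservice systemd unit file
  template: src=myservice.j2 dest=/etc/systemd/system/myservice.service
  notify: reload systemd

# handlers file
- name: reload systemd
  command: systemctl daemon-reload
```

With Ansible 2.2 or later, daemon_reload=yes on the systemd task makes the handler unnecessary.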
I have created a script to start/stop my application. Now I want to add it as a CentOS system service. First I created a task to create a link from my script to /etc/init.d/service_name, as below.
---
- name: create startup link
  file: src={{ cooltoo_service_script }} dest={{ cooltoo_service_init }} state=link
After creating the link, I want to register it as a system service. The command to do that is "chkconfig --add service_name". I wonder whether there is an Ansible module to do that instead of hardcoding the command in the playbook. I have looked at this page http://docs.ansible.com/ansible/service_module.html but it only shows how to manage a service, not how to register a new one.
The code snippets below will create the service on CentOS 7.
Code
Tasks
/tasks/main.yml
- name: TeamCity | Create environment file
  template: src=teamcity.env.j2 dest=/etc/sysconfig/teamcity

- name: TeamCity | Create Unit file
  template: src=teamcity.service.j2 dest=/lib/systemd/system/teamcity.service mode=644
  notify:
    - reload systemctl

- name: TeamCity | Start teamcity
  service: name=teamcity.service state=started enabled=yes
Templates
/templates/teamcity.service.j2
[Unit]
Description=JetBrains TeamCity
Requires=network.target
After=syslog.target network.target
[Service]
Type=forking
EnvironmentFile=/etc/sysconfig/teamcity
ExecStart={{teamcity.installation_path}}/bin/teamcity-server.sh start
ExecStop={{teamcity.installation_path}}/bin/teamcity-server.sh stop
User=teamcity
PIDFile={{teamcity.installation_path}}/teamcity.pid
Environment="TEAMCITY_PID_FILE_PATH={{teamcity.installation_path}}/teamcity.pid"
[Install]
WantedBy=multi-user.target
/templates/teamcity.env.j2
TEAMCITY_DATA_PATH="{{ teamcity.data_path }}"
Handlers
/handlers/main.yml
- name: reload systemctl
  command: systemctl daemon-reload
Reference :
Ansible playbook structure: http://docs.ansible.com/ansible/playbooks_intro.html
SystemCtl: https://www.digitalocean.com/community/tutorials/how-to-use-systemctl-to-manage-systemd-services-and-units
The 'service' module supports an 'enabled' argument.
Here's an example part of a playbook, which I will freely admit does look like a newbie attempt. This assumes RHEL/CentOS 6.x, which uses SysV, not systemd.
- name: install rhel sysv supervisord init script
  copy: src=etc/rc.d/init.d/supervisord dest=/etc/rc.d/init.d/supervisord owner=root group=root mode=0755

- name: install rhel sysv supervisord sysconfig
  copy: src=etc/sysconfig/supervisord dest=/etc/sysconfig/supervisord owner=root group=root mode=0640

- name: enable sysv supervisord service
  service: name=supervisord enabled=yes

- name: start supervisord
  service: name=supervisord state=started
IMPORTANT: A lot of custom init scripts WILL FAIL with Ansible and SysV init. The reason is that the 'status' option (service supervisord status) needs to return an LSB-compliant return code. Otherwise, Ansible will not know whether a service is up or down, and idempotency will fail (restart will still work, because it is unconditional).
Here's part of a script, which I've just rewritten to make use of the 'status' function within /etc/init.d/functions (you'll notice this same pattern in other Red Hat provided init scripts in /etc/init.d):

status)
    /usr/bin/supervisorctl $OPTIONS status
    status -p $PIDFILE supervisord
    # The 'status' option should return one of the LSB-defined return codes;
    # in particular, return code 3 should mean that the service is not
    # currently running. This is particularly important for Ansible's 'service'
    # module, as without this behaviour it won't know if a service is up or down.
    RETVAL=$?
    ;;
Reference: http://refspecs.linuxfoundation.org/LSB_5.0.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html
If the status action is requested, the init script will return the following exit status codes:

0       program is running or service is OK
1       program is dead and /var/run pid file exists
2       program is dead and /var/lock lock file exists
3       program is not running
4       program or service status is unknown
5-99    reserved for future LSB use
100-149 reserved for distribution use
150-199 reserved for application use
200-254 reserved
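To see why return code 3 matters to Ansible, here is a tiny standalone sketch of the convention (the function name and PID-file path are made up for illustration, not part of any real init script):

```shell
# Hypothetical status check following the LSB convention:
# return 0 when the process behind the PID file is alive, 3 when it is not running.
service_status() {
    pidfile=$1
    if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
        return 0   # running
    fi
    return 3       # not running
}

rc=0
service_status /tmp/no-such-service.pid || rc=$?
echo "status exit code: $rc"   # no PID file -> 3, i.e. "not running"
```

Ansible's service module relies on exactly this distinction between 0 and 3 to decide whether a start is needed.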
Indeed, the service module only manages already registered services, as you have figured out. To my knowledge there is no module to register a service.
Are you aware this step can be skipped with some modifications to your init.d script? If the script carries the standard chkconfig/LSB header comments, you can just use the service module to enable and start the service.
For RedHat/CentOS 7 (using systemd/systemctl), the equivalent of chkconfig --add ${SERVICE_NAME} is systemctl daemon-reload [via fedoraproject.org].
Then, using the systemd module of Ansible 2.2 or greater, you may start a service with a preceding systemctl daemon-reload like so [via docs.ansible.com]:
# Example action to restart service cron on centos, in all cases, also issue daemon-reload to pick up config changes
- systemd:
state: restarted
daemon_reload: yes
name: crond
Based on my experience, the daemon_reload parameter can also be used with the generic service module, although it isn't documented and might fail on non-systemd systems:
- service:
state: restarted
daemon_reload: yes
name: crond
I'm running RethinkDB on Amazon Linux AMI. I already have services running on 8080 so I need to change the port for the WebUI interface. How would I do that?
I happened to find the documentation here: https://www.rethinkdb.com/docs/cli-options/
$ rethinkdb --bind all --http-port 9090
Karthick here has the right answer if you are running your instance of RethinkDB from the command line and daemonizing it.
In case you are running the default system configuration of RethinkDB after, say, sudo apt-get install rethinkdb, and want to change it there, you have to edit the configuration file by following these steps:
Look under the directory /etc/rethinkdb, find the RethinkDB configuration file, and change the http-port value to the new port you'd like it to be on.
Then, if your system uses init.d, you should be able to restart with sudo service rethinkdb restart. If your system uses systemd, you'll do something like sudo systemctl restart rethinkdb.
These two links will be a good resource to you in this case:
https://www.rethinkdb.com/docs/config-file/
https://www.rethinkdb.com/docs/start-on-startup/
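For reference, the change itself is a one-line setting. On packaged installs the instance config typically lives under /etc/rethinkdb/instances.d/ (the exact filename and path are assumptions; verify them on Amazon Linux):

```
# /etc/rethinkdb/instances.d/default.conf
http-port=9090
```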
I have a set of web server processes that I wish to restart one at a time. I want to wait for process N to be ready to service HTTP requests before restarting process N+1.
The following works:
- name: restart server 9990
  supervisorctl: name='server_9990' state=restarted
- wait_for: port=9990 delay=1

- name: restart server 9991
  supervisorctl: name='server_9991' state=restarted
- wait_for: port=9991 delay=1
etc.
But I'd really like to do this in a loop. It seems that Ansible doesn't allow multiple tasks inside a loop (in this case, I need two tasks: supervisorctl and wait_for)
Am I missing a way to do this or is replicating these tasks for each instance of the server really the way to go?
I believe that's not possible with Ansible's default functionality. I think your best bet would be to create your own module. I have seen that modules can call other modules, so you might be able to write a slim module which simply calls the supervisorctl module and then waits for the port to be ready. This module could then be called with with_items.
Another idea is to not use supervisorctl in the first place. You could run a shell or script task which does then manually call supervisorctl and waits for the port to open.
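A sketch of that second idea, using a looped shell task (the port list and the nc-based wait are assumptions, not tested against your setup):

```yaml
- name: restart each server and wait for its port
  shell: |
    supervisorctl restart server_{{ item }}
    until nc -z localhost {{ item }}; do sleep 1; done
  with_items:
    - 9990
    - 9991
```

This loses the idempotency reporting of the supervisorctl module, but it keeps the restart-then-wait pairing inside a single loopable task.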