I wrote a playbook to download, unarchive, and install a tar file:
- name: Install DB
  remote_user: ldb
  hosts: db
  tasks:
    - name: Create download directory
      file:
        path: /home/ldb/servicebroker
        state: directory
    - name: download DB and service_broker
      get_url:
        url: "http://192.168.1.133:12345/stage/{{ item }}"
        dest: /home/ldb/servicebroker
        mode: 0755
        timeout: 30
      with_items:
        - linkoopdb/4.1.0/zettabase-4.1.0-rc6.x86_64.iso
        - service_broker/4.1.0/servicebroker-4.1.0-rc6.x86_64.tar.gz
    - name: unzip tar file
      unarchive:
        src: /home/ldb/servicebroker/servicebroker-4.1.0-rc6.x86_64.tar.gz
        dest: /home/ldb/servicebroker/
    - name: Start master
      shell: "/home/ldb/servicebroker/brokerServer --master_ip 192.168.14.94 --master_port 7777"
    - name: Start slave
      shell: "/home/ldb/servicebroker/brokerServer --master_ip 192.168.14.94 --master_port 7777 --slave_ip {{ item }} --slave_port 7777 join"
      with_items:
        - 192.168.14.95
        - 192.168.14.94
        - 192.168.14.96
        - 192.168.14.97
        - 192.168.14.37
        - 192.168.14.38
        - 192.168.14.39
    - name: Check for servicebroker command
      shell: /home/ldb/servicebroker/bcli show service_broker
      register: command_output
    - name: Create repository and upload install DB
      shell: "{{ item }}"
      retries: 3
      delay: 10
      register: command_output
      with_items:
        - /home/ldb/servicebroker/bcli create repository db1
        - /home/ldb/servicebroker/bcli show repository all
        - /home/ldb/servicebroker/bcli upload zettabase-4.1.0-rc6.x86_64.iso
    - name: Print result
      debug:
        var: command_output.stdout_lines
But the run failed with this error:
fatal: [192.168.14.96]: FAILED! => {"changed": false, "msg": "Could not find or access '/home/ldb/servicebroker/servicebroker-4.1.0-rc6.x86_64.tar.gz' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"}
I also found that the downloaded files are owned by root, not the ldb user:
-rwxr-xr-x 1 root root 2162550784 Apr 19 19:36 base-4.1.0-rc6.x86_64.iso
-rwxr-xr-x 1 root root 3918489072 Apr 19 19:47 base-4.1.0-rc6.x86_64.tar.gz
-rwxr-xr-x 1 root root 31523406 Apr 19 20:03 servicebroker-4.1.0-rc6.x86_64.tar.gz
It seems remote_user: ldb is not taking effect. Please help me check this. Thanks!
I think you just need to look at the error more closely:
"Could not find or access '/home/ldb/servicebroker/servicebroker-4.1.0-rc6.x86_64.tar.gz' *on the Ansible Controller*.
You are downloading the file remotely, but trying to access it locally on the controller. In unarchive where you say:
src: /home/ldb/servicebroker/servicebroker-4.1.0-rc6.x86_64.tar.gz
Try:
src: /home/ldb/servicebroker/servicebroker-4.1.0-rc6.x86_64.tar.gz
remote_src: yes
Note the documentation for unarchive here:
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/unarchive_module.html
Also note: if remote_user: ldb doesn't seem to take effect, check whether the ldb user has a login shell assigned, since Ansible can't log in as a user without one. Assuming it is a service account used to run a service, you probably don't want to make it an interactive user just for Ansible's sake.
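Putting the suggestion together, the unarchive task from the question would become the following (remote_src tells Ansible the archive already lives on the managed host rather than on the controller):

```yaml
- name: unzip tar file
  unarchive:
    src: /home/ldb/servicebroker/servicebroker-4.1.0-rc6.x86_64.tar.gz
    dest: /home/ldb/servicebroker/
    remote_src: yes
```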
I'm very new to Ansible and trying to figure things out. I have a simple playbook to run on a remote host. To simplify drastically:
- hosts: all
  name: build render VM
  tasks:
    - copy:
        src: ./project_{{ project_id }}.yaml
        dest: /app/project.yaml
        owner: root
I would like to have project_id set to the output of this command, run on localhost: gcloud config get-value project. Ideally I'd like that to be stored into a variable or fact that can be used throughout the playbook. I know I can pass project_id=$(...) on the ansible cmd line, but I'd rather have it set up automatically in the playbook.
Taking for granted that the given command returns only the id and nothing else.
With a task delegated to localhost:
- hosts: all
  name: build render VM
  tasks:
    - name: get project id
      command: gcloud config get-value project
      register: gcloud_cmd
      run_once: true
      delegate_to: localhost
    - name: set project id
      set_fact:
        project_id: "{{ gcloud_cmd.stdout }}"
    - copy:
        src: ./project_{{ project_id }}.yaml
        dest: /app/project.yaml
        owner: root
With a pipe lookup:
- hosts: all
  name: build render VM
  tasks:
    - name: set project id from localhost command
      set_fact:
        project_id: "{{ lookup('pipe', 'gcloud config get-value project') }}"
      run_once: true
    - copy:
        src: ./project_{{ project_id }}.yaml
        dest: /app/project.yaml
        owner: root
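A third variant, assuming the command is cheap to run: define project_id as a play-level variable with the same pipe lookup. Lookups always execute on the controller and are evaluated lazily, each time the variable is referenced:

```yaml
- hosts: all
  name: build render VM
  vars:
    project_id: "{{ lookup('pipe', 'gcloud config get-value project') }}"
  tasks:
    - copy:
        src: ./project_{{ project_id }}.yaml
        dest: /app/project.yaml
        owner: root
```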
In Ansible (RHEL 8 if it matters), I need to create a temporary file from a template with sensitive content. After a few other tasks are completed, it should be deleted. The temporary file is an answerfile for an installer that will run as a command. The installer needs a user name and password.
I can't figure out if there is a way to do this easily in Ansible.
The brute-force implementation of what I'm looking for would look similar to this:
- name: Create answer file
  template:
    src: answerfile.xml.j2
    dest: /somewhere/answer.xml
    owner: root
    group: root
    mode: '0600'
- name: Install
  command: /somewhere/myinstaller --answerfile /somewhere/answer.xml
  args:
    creates: /somewhereelse/installedprogram
- name: Delete answerfile
  file:
    path: /somewhere/answer.xml
    state: absent
Of course, this code is not idempotent: the answer file would get created and destroyed on each run.
Is there a better way to do this?
Test for the existence of the file. If it exists, skip the block. For example:
- stat:
    path: /somewhereelse/installedprogram
  register: st

- block:
    - name: Create answer file
      template:
        src: answerfile.xml.j2
        dest: /somewhere/answer.xml
        owner: root
        group: root
        mode: '0600'
    - name: Install
      command: /somewhere/myinstaller --answerfile /somewhere/answer.xml
    - name: Delete answerfile
      file:
        path: /somewhere/answer.xml
        state: absent
  when: not st.stat.exists
(not tested)
Taking the task "Delete answerfile" out of the block will make the code more secure. It will always make sure the credentials are not stored in the file. The task won't fail if the file is not present.
- stat:
    path: /somewhereelse/installedprogram
  register: st

- block:
    - name: Create answer file
      template:
        src: answerfile.xml.j2
        dest: /somewhere/answer.xml
        owner: root
        group: root
        mode: '0600'
    - name: Install
      command: /somewhere/myinstaller --answerfile /somewhere/answer.xml
  when: not st.stat.exists

- name: Delete answerfile
  file:
    path: /somewhere/answer.xml
    state: absent
(not tested)
I think that this solution potentially represents a slight improvement on your solution.
With this solution the tasks which create and delete the answerfile will be skipped (rather than always run and reporting changed) if the program you're targeting is already installed.
I still don't love this solution as I don't really like skips.
# Try to call installedprogram. --version is arbitrary here;
# --help, or a simple `which installedprogram`, could be alternatives.
- name: Try run installedprogram
  command: '/somewhereelse/installedprogram --version'
  register: installedprogram_exists
  ignore_errors: yes
  changed_when: False

# Only create answer file if installedprogram is not installed
- name: Create answer file
  template:
    src: answerfile.xml.j2
    dest: /somewhere/answer.xml
    owner: root
    group: root
    mode: '0600'
  when: installedprogram_exists.rc != 0

- name: Install
  command: /somewhere/myinstaller --answerfile /somewhere/answer.xml
  args:
    creates: /somewhereelse/installedprogram

# Only delete answer file if installedprogram is not installed
- name: Delete answerfile
  file:
    path: /somewhere/answer.xml
    state: absent
  when: installedprogram_exists.rc != 0
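A related pattern worth knowing, if you want the answer file removed even when the installer fails mid-run: put the cleanup task in an always section of a block, which runs whether the tasks above it succeeded or not. A sketch combining this with the stat check from the earlier answer (not tested):

```yaml
- stat:
    path: /somewhereelse/installedprogram
  register: st

- block:
    - name: Create answer file
      template:
        src: answerfile.xml.j2
        dest: /somewhere/answer.xml
        owner: root
        group: root
        mode: '0600'
    - name: Install
      command: /somewhere/myinstaller --answerfile /somewhere/answer.xml
  always:
    - name: Delete answerfile
      file:
        path: /somewhere/answer.xml
        state: absent
  when: not st.stat.exists
```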
I am trying to create a user account using Ansible on Ubuntu 20.04, but I am getting this error:
msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
The same playbook works fine on Ubuntu 18.04.
Below is my playbook:
- hosts: abc
  remote_user: root
  become: true
  tasks:
    - name: create user account admin with password xyz
      user:
        name: admin
        group: admin
        shell: /bin/bash
        password: $6$pLkiHBvZOf9/zctp1SlLXC2PsTFfwwcwmE73wuwwXb2g8.
        append: yes
    - name: creating .ssh directory for account admin
      file:
        path: /home/admin/.ssh
        state: directory
        group: admin
        owner: admin
        mode: 0755
    - name: copy authorized_keys file from root
      copy:
        src: /root/.ssh/authorized_keys
        dest: /home/admin/.ssh
        remote_src: yes
        group: admin
        owner: admin
    - name: change the ssh port
      lineinfile:
        path: /etc/ssh/sshd_config
        state: present
        insertafter: '#Port 22'
        line: "Port 811"
        backup: yes
    - name: disable the root login
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^PermitRootLogin yes'
        line: 'PermitRootLogin no'
    - name: Restart ssh
      service: name=ssh state=restarted
Can you please help me find the cause of this error? Thank you.
You can usually get more information from Ansible by capturing the error and emitting it:
- name: create user account admin with password xyz
  user:
    name: admin
    group: admin
    shell: /bin/bash
    password: $6$pLkiHBvZOf9/zctp1SlLXC2PsTFfwwcwmE73wuwwXb2g8.
    append: yes
  ignore_errors: yes
  register: kaboom

- debug: var=kaboom
- fail: msg=yup
You will get the most information by also running Ansible with ANSIBLE_DEBUG=1 ansible-playbook -vvvv, although often the extra verbosity still isn't enough to surface the actual exception text, so try that register: trick first.
I have an ansible playbook which will copy a file into a location on a remote server. It works fine. In this case, the file is an rpm. This is the way it works:
---
- hosts: my_host
  tasks:
    - name: mkdir /tmp/RPMS
      file: path=/tmp/RPMS state=directory
    - name: copy RPMs to /tmp/RPMS
      copy:
        src: "{{ item }}"
        dest: /tmp/RPMS
        mode: 0755
      with_items:
        - any_rpm-x86_64.rpm
      register: rpms_copied
Now, with the file successfully on the remote server, I need some new logic to install the rpm that sits in /tmp/RPMS. I have tried many different versions of the below (this code is appended to the playbook above):
- name: install rpm from file
  yum:
    name: /tmp/RPMS/any_rpm-x86_64.rpm
    state: present
  become: true
I don't know if the formatting is incorrect, or if this is simply not the way. Can anyone advise how I can get the rpm in /tmp/RPMS installed using a few new lines in the existing playbook?
Thanks.
I did not find this anywhere else, and it genuinely took me all of my working day to get to this point. For anyone else struggling:
- name: Install my package from a file on server
  shell: rpm -ivh /tmp/RPMS/*.rpm
  async: 1800
  poll: 0
  become_method: sudo
  become: yes
  become_user: root
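For what it's worth, the yum module can normally install directly from a local file path, much like the task in the question; if that task failed on a signature check, disable_gpg_check may have been the missing piece. A sketch under that assumption (untested against this package):

```yaml
- name: install rpm from file
  yum:
    name: /tmp/RPMS/any_rpm-x86_64.rpm
    state: present
    disable_gpg_check: yes
  become: true
```

Unlike the async shell workaround, this stays idempotent: yum reports "ok" instead of rerunning the install when the package is already present.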
I have an Ansible (2.1.1) inventory:
build_machine ansible_host=localhost ansible_connection=local
staging_machine ansible_host=my.staging.host ansible_user=stager
I'm using SSH without ControlMaster.
I have a playbook that has a synchronize command:
- name: Copy build to staging
  hosts: staging_machine
  tasks:
    - synchronize: src=... dest=...
      delegate_to: staging_machine
      remote_user: stager
The command prompts for password of the wrong user:
local-mac-user#my-staging-host's password:
So instead of using the ansible_user defined in the inventory or the remote_user defined in the task, Ansible connects to the target hosts (those specified in the play) as the user it used to reach the delegate-to box.
What am I doing wrong? How do I fix this?
EDIT: It works in 2.0.2, doesn't work in 2.1.x
The remote_user setting is used at the play level to run a particular play as a given user. For example:
---
- hosts: webservers
  remote_user: root
  tasks:
    - name: ensure apache is at the latest version
      yum:
        name: httpd
        state: latest
    - name: write the apache config file
      template:
        src: /srv/httpd.j2
        dest: /etc/httpd.conf
If only a certain task needs to run as a different user, you can use the become and become_user settings:
- name: Run command
  command: whoami
  become: yes
  become_user: some_user
Finally, if you have a group of tasks to run as a user within a play, you can group them with block. For example:
- block:
    - name: checkout repo
      git:
        repo: https://github.com/some/repo.git
        version: master
        dest: "{{ dst }}"
    - name: change perms
      file:
        dest: "{{ dst }}"
        state: directory
        mode: 0755
        owner: some_user
  become: yes
  become_user: some_user
Reference:
- How to switch a user per task or set of tasks?
- https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html
This one works for me, but please note that it is for Windows; Linux does not require become_method: runas and in fact does not have it:
- name: restart IIS services
  win_service:
    name: '{{ item }}'
    state: restarted
    start_mode: auto
    force_dependent_services: true
  loop:
    - 'SMTPSVC'
    - 'IISADMIN'
  become: yes
  become_method: runas
  become_user: '{{ webserver_user }}'
  vars:
    ansible_become_password: '{{ webserver_password }}'
  delegate_facts: true
  delegate_to: '{{ groups["webserver"][0] }}'
  when: dev_env
Try setting become: yes and become_user: stager in your YAML file; that should fix it.
https://docs.ansible.com/ansible/2.5/user_guide/become.html
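Applied to the task from the question, that suggestion would look something like this (untested):

```yaml
- name: Copy build to staging
  hosts: staging_machine
  tasks:
    - synchronize: src=... dest=...
      become: yes
      become_user: stager
```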