I would like to simply reboot the VCSA using Ansible as part of a development workflow. Does anybody have any ideas as to how to do so?
https://kb.vmware.com/s/article/2147152
Normally this would be done by entering 'shell' at the appliance console and then stopping/starting the service and/or running 'reboot'.
I've been playing around with ansible.raw, but it seems to hang indefinitely.
A few of the attempts I have tried:
tasks:
  - name: 'get into the shell'
    raw: 'shell'
    register: shell

  - debug: msg="{{ shell }}"

  - name: 'reboot'
    raw: 'reboot'
    register: reboot

  - debug: msg="{{ reboot }}"

  - name: Unconditionally reboot the machine with all defaults
    reboot:
I resolved this by manually going into every system and changing the default shell to /bin/bash.
https://communities.vmware.com/t5/vCenter-Server-Discussions/Access-to-VCSA-through-SSH-and-or-local-console-is-NOT-working/td-p/1816293
~# vi /etc/passwd
and change appliancesh to /bin/bash for the root user:
root:x:0:0:root:/root:/bin/bash
Also make sure that SSH access for root is enabled in /etc/ssh/sshd_config.
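Once root's login shell is bash and root SSH login is permitted, a reboot play along these lines should work. This is only a minimal sketch; the vcsa host group name and the timeout value are assumptions, not taken from the original answer:

- hosts: vcsa                    # assumed inventory group for the appliance
  gather_facts: false
  tasks:
    - name: Reboot the VCSA and wait for it to come back
      reboot:
        reboot_timeout: 1800     # the appliance can take a while; adjust as needed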
Related
I have a Linux box with a user "user1" whose login shell is the C shell. The user has a .cshrc in their home directory with some useful environment settings.
When I use this Ansible logic, though, the environment is not set properly for the user:
---
- name: some playbook
  hosts: remote_host
  become: yes
  become_user: root
  become_method: sudo
  tasks:
    - name: Check environment variables for user1
      become: yes
      become_user: user1
      become_method: sudo
      shell: "env"
      register: envresult

    - name: debug env
      debug:
        var: envresult.stdout
In the output I can see that the variables set in .cshrc are not in the environment. How can I force Ansible to process the users' login scripts on become?
Thank you!
You should add become_flags: "-i" to your task. With sudo, the -i flag starts a login shell for the target user, so their login scripts (such as .cshrc for csh users) are sourced. The task then looks like:
- name: Check environment variables for user1
  become: yes
  become_user: user1
  become_method: sudo
  become_flags: "-i"
  shell: "env"
  register: envresult
More information is available in the Ansible documentation.
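If every escalated task in the play should get a login shell, the flag can also be set once at the play level. A minimal sketch based on the playbook above (nothing else changed):

---
- name: some playbook
  hosts: remote_host
  become: yes
  become_method: sudo
  become_flags: "-i"             # every sudo in this play runs a login shell
  tasks:
    - name: Check environment variables for user1
      become_user: user1
      shell: "env"
      register: envresult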
Before running a patching playbook for real, I ran it with the --check option as a dry run. However, one of the tasks in the playbook, which checks whether the group needs a reboot, doesn't register the reboot_hint variable as intended:
- name: check for reboot
  shell: needs-restarting -r
  register: reboot_hint
  failed_when: reboot_hint.rc > 1

- name: debug, show the reboot hint variable
  debug:
    var:
      - reboot_hint.rc
      - reboot_hint
I get the message "VARIABLE IS NOT DEFINED!" for the run.
What could be causing this? I expect a return value of 1 or 0, which is what I get when I log in on the command line, run needs-restarting -r, and then echo $?:
No core libraries or services have been updated.
Reboot is probably not necessary.
> echo $?
0
You're running --check and thus the shell command doesn't run. Because the shell command doesn't run there's nothing to register.
You can read more about this in https://docs.ansible.com/ansible/latest/user_guide/playbooks_checkmode.html#enabling-or-disabling-check-mode-for-tasks
A simple fix is adding check_mode: no, e.g.
- name: check for reboot
  check_mode: no
  shell: needs-restarting -r
  register: reboot_hint
  failed_when: reboot_hint.rc > 1
This forces the task to run even when check-mode is enabled.
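As a follow-up, the registered return code can then drive the actual reboot, since needs-restarting -r exits 1 when a reboot is required and 0 when it is not. The task below is a sketch of that idea, not part of the original answer:

- name: reboot when the patched host requires it
  reboot:
  when: reboot_hint.rc == 1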
I want to copy a script to a remote server and then execute it. I can copy it to the directory /home/user/scripts, but when Ansible runs the script task, it returns Could not find or access '/home/user/scripts/servicios.sh'.
The full error output is:
fatal: [192.168.1.142]: FAILED! => {"changed": false, "msg": "Could not find or access '/home/user/scripts/servicios.sh'"}
Here is the Ansible playbook:
- name: correr script
  hosts: all
  tasks:
    - name: crear carpeta de scripts
      file:
        path: /home/user/scripts
        state: directory

    - name: copiar el script
      copy:
        src: /home/local/servicios.sh
        dest: /home/user/scripts/servicios.sh

    - name: ejecutar script como sudo
      become: yes
      script: /home/user/scripts/servicios.sh
You don't need to create a directory and copy the script to the target (remote node); the script module does that for you. It takes the script path followed by a list of space-delimited arguments. The local script at that path is transferred to the remote node and then executed, and it is processed through the shell environment on the remote node.
You were getting the error because the script module expects the given path (/home/user/scripts/servicios.sh) to exist on your Ansible controller (the node you are running the playbook from). To make it work, specify the controller-side path (/home/local/servicios.sh) in the script task instead of /home/user/scripts/servicios.sh, which is the path on the remote node.
So you can change the playbook like this (you can also register the result of the script as a variable if you would like to see its output):
---
- name: correr script
  hosts: all
  become: yes
  tasks:
    - name: ejecutar script como sudo
      script: /home/local/servicios.sh
      register: console

    - debug: msg="{{ console.stdout }}"
    - debug: msg="{{ console.stderr }}"
What if you don't want to use the script module, and instead want to create a directory, copy the script to the target (remote node) explicitly, and run it there? No worries, you can still do that with the command module, like this:
---
- name: correr script
  hosts: all
  become: yes
  tasks:
    - name: crear carpeta de scripts
      file:
        path: /home/user/scripts
        state: directory

    - name: copiar el script
      copy:
        src: /home/local/servicios.sh
        dest: /home/user/scripts/servicios.sh

    - name: ejecutar script como sudo
      command: bash /home/user/scripts/servicios.sh
      register: console

    - debug: msg="{{ console.stdout }}"
    - debug: msg="{{ console.stderr }}"
But I strongly recommend going with the script module.
The script module itself transfers the script from the local machine to the remote machine and executes it there.
So the path specified in the script module is a path on the local machine, not the remote machine, i.e. /home/local/servicios.sh instead of /home/user/scripts/servicios.sh.
Because you specified the path that is supposed to exist on the remote machine, Ansible cannot find the script at that path on the local machine, which results in the given error.
Hence, update the path in the task to the local path as shown below:
- name: ejecutar script como sudo
  become: yes
  script: /home/local/servicios.sh
So scripts can't be executed inside the remote server and must instead be executed from the local machine against the remote?
@thrash3d No, it is not like that. When you use the script module, the script is transferred to the remote machine and then executed there. If there is a script you don't want to keep on your remote machine and just want to execute, you can use the script module.
If you do want that script to stay on your remote machine, you can first copy it there and then execute it.
Both ways are correct; it is up to you which one suits your case better.
I'm trying to run a shell script on the remote host after copying it over with Ansible. The script has 777 permissions.
Please read the question below, as it gives the full scope of the actual issue we are trying to deal with:
Set different ORACLE_HOME and PATH environment variable using Ansible
- name: Run the Script [List]
  shell: "/tmp/sqlscript/sql_select.sh {{ item }} >> /tmp/sqlscript/output.out"
  become: yes
  become_method: sudo
  become_user: oracle
  register: orh
  with_items: "{{ factor_dbs.split('\n') }}"
Below is the shell script
#!/bin/bash
source $HOME/bin/gsd_xenv $1 &> /dev/null
sqlplus -s <<EOF
/ as sysdba
set heading off
select d.name||','||i.instance_name||','||i.host_name||';' from v\$database d,v\$instance i;
EOF
Despite escalating privileges, I observed that the task does not execute unless I add environment variables like below:
- name: Run the script [List]
  shell: "/tmp/sqlscript/oracle_home.sh {{ item }} >> /tmp/sqlscript/orahome.out"
  become: yes
  become_method: sudo
  become_user: oracle
  environment:
    PATH: "/home/oracle/bin:/usr/orasys/12.1.0.2r10/bin:/usr/bin:/bin:/usr/ucb:/sbin:/usr/sbin:/etc:/usr/local/bin:/oradata/epdmat/goldengate/config/sys"
    ORACLE_HOME: "/usr/orasys/12.1.0.2r10"
  register: orh
  with_items: "{{ factor_dbs.split('\n') }}"
However, this playbook needs to run across different hosts that have different PATH and ORACLE_HOME values.
My question is: why doesn't the task run despite escalating permissions? When I run the same script manually by logging into the server and doing "sudo su oracle", it runs fine.
It depends on where you actually set your environment variables. There is a difference between executing a script when you are logged in on a remote machine and running a script over SSH as Ansible does (see e.g. Differentiate Interactive login and non-interactive non-login shell). Depending on the type of shell and your system, different bash profiles are loaded.
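One way around this, sketched here as a suggestion rather than something from the original answer, is to keep the per-host values in inventory (host_vars/group_vars) and reference them from the environment keyword, so the same task works on hosts with different Oracle installations. The oracle_home and oracle_path variable names below are made up for illustration:

# host_vars/dbhost1.yml (hypothetical host)
oracle_home: /usr/orasys/12.1.0.2r10
oracle_path: "/home/oracle/bin:{{ oracle_home }}/bin:/usr/bin:/bin:/sbin:/usr/sbin"

# task in the playbook
- name: Run the Script [List]
  shell: "/tmp/sqlscript/sql_select.sh {{ item }} >> /tmp/sqlscript/output.out"
  become: yes
  become_method: sudo
  become_user: oracle
  environment:
    ORACLE_HOME: "{{ oracle_home }}"
    PATH: "{{ oracle_path }}"
  register: orh
  with_items: "{{ factor_dbs.split('\n') }}"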
I'm trying to follow this solution, using the shell module and ssh-keyscan, to add the key of a newly created EC2 instance to my known_hosts file.
After trying this multiple ways as listed in that question, I eventually ran just the ssh-keyscan command with the shell module, without the append. I get no output from this task:
- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.public_ip }}
  args:
    executable: /bin/bash
  with_items: "{{ ec2.instances }}"
  register: keyscan

- debug: var=keyscan
The debug output shows nothing in stdout/stdout_lines and nothing in stderr/stderr_lines.
Note: I tried running this with bash as the executable (shown above) after reading that the shell module defaults to /bin/sh, which is dash on my Linux Mint VirtualBox. But it's the same regardless.
I have tested the shell command with the following task and I see the proper output in stdout and stdout_lines:
- name: test the shell
  shell: echo hello
  args:
    executable: /bin/bash
  register: hello

- debug: var=hello
What is going on here? Running ssh-keyscan in a terminal (not through Ansible) works as expected.
EDIT: The raw_params output from debug shows ssh-keyscan -H x.x.x.x, and copying and pasting this into a terminal works as expected.
The answer is that it doesn't work the first time. While researching another method, I stumbled across the retries keyword in Ansible, which allows a task to be retried. I tried this, and on attempt number 2 of the retry loop it works.
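A sketch of what that retry can look like, based on the task from the question; the retries, delay, and until condition are assumptions, not the exact values used:

- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.public_ip }}
  args:
    executable: /bin/bash
  register: keyscan
  retries: 5                        # retry until ssh-keyscan returns output
  delay: 2
  until: keyscan.stdout != ""
  with_items: "{{ ec2.instances }}"

- debug: var=keyscan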