I have read the documentation for the expect module here.
I'm trying to join a CentOS 7 server to a 2012 AD domain controller. Here is my playbook:
- name: Attempt to join the server to AD
  expect:
    command: realm join --user=admin#mydomain.local mydomain.local
    responses:
      (?i)Password for admin#mydomain.local: abc123
The Ansible playbook fails, saying the password is incorrect. Is this the correct way of using expect?
Have you tried encapsulating the password in quotes, like so?
(?i)Password for admin#mydomain.local: "abc123"
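Put together, the whole task with the quoted response might look like the sketch below (the regex key and credentials are taken from the question; quoting the regex key as well is optional, but it avoids YAML parsing surprises):

- name: Attempt to join the server to AD
  expect:
    command: realm join --user=admin#mydomain.local mydomain.local
    responses:
      "(?i)Password for admin#mydomain.local": "abc123"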
I need to log in to 50 hosts and perform a specific task.
Each host has one of 2 passwords (ex: pass1 and pass2) for a specific user (ex: foo).
I do not know on which host "foo" is set with "pass1" and on which host "foo" is set with "pass2". I have both passwords in a vault file.
Using Ansible, how can I first create a task that tries to log in as "foo" with "pass1", then, if that fails, logs in with "pass2", and finally sets a fact with the correct vault value (depending on which password allowed "foo" to log in)?
I then want to use that fact to perform additional tasks on that same host.
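For what it's worth, one way this could be approached is to probe the host with the first password and record whichever one worked. A rough, untested sketch, assuming the two vault variables are called vault_pass1 and vault_pass2 (placeholder names) and the connection uses SSH password authentication:

- name: Try the first password
  ansible.builtin.ping:
  vars:
    ansible_password: "{{ vault_pass1 }}"
  register: try1
  ignore_errors: true
  ignore_unreachable: true

- name: Remember which password worked
  ansible.builtin.set_fact:
    foo_password: "{{ vault_pass2 if (try1.unreachable | default(false)) or (try1.failed | default(false)) else vault_pass1 }}"

Later tasks could then set ansible_password: "{{ foo_password }}" (at play or task level) to run the real work as "foo" with the right password.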
What I am trying to do is register variables accordingly. I ping an interface and then try to verify whether the ping was successful or not. However, my code is not working the way I imagined. I have simplified it to the following sample:
- name: Verify ping
  ansible.builtin.shell: echo "yes it is online"
  register: tmp
  when: ping_output is search("5/5")

- name: Verify ping
  ansible.builtin.shell: echo "no it is offline"
  register: tmp
  when: ping_output is search("0/5")
The variable ping_output was registered in a previous task. What I have observed during my tests is that tmp is sometimes not set when I later evaluate it. This happens when the first conditional is true: tmp.stdout is then set to "yes it is online", but even though Ansible skips the second conditional (the "else" case), the skipped task still overwrites tmp, so at some point I get "tmp.stdout is not defined". Do you have better suggestions for how to solve this?
Thanks in advance
EDIT:
Hi, thanks for the review. I'm trying to store information in variables, and yes, later I reference tmp.stdout. I actually ping from a Cisco device and save the output. There are several pings, so several outputs. Then there are tasks that evaluate these outputs. The goal is to prepare an appropriate description that I send via JSON as an API call. What the end user sees in the end is a description in an external trouble-ticketing system with information about which interfaces were pingable, which were not, etc.
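One way to sidestep the problem of a skipped task overwriting the registered variable might be to compute the message in a single set_fact instead of two shell tasks. A sketch, reusing ping_output and the search patterns from the sample above (ping_message is just a placeholder name):

- name: Record ping result
  ansible.builtin.set_fact:
    ping_message: "{{ 'yes it is online' if ping_output is search('5/5') else 'no it is offline' }}"

That way ping_message is always defined and no task needs to be skipped.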
I'm working with the Let's Encrypt dns-01 challenge system, which entails dynamically creating a TXT record in Google Cloud DNS with specific content so LE can assert proof of ownership for generating a wildcard certificate (so I can't use http-01). The problem is that sometimes LE tells me to create a TXT record that starts with a "-", for example -E_DFDFHJKF1783FSHDJ. I cannot get the gcloud CLI to accept this data properly, no matter what I do.
Example:
gcloud dns record-sets transaction start --zone=myzone
gcloud dns record-sets transaction add "-E_ASDFSDF" --ttl=30 --zone=myzone --name=test --type=TXT
gcloud dns record-sets transaction remove "-A_DSFKHSDF" --ttl=30 --zone=myzone --name=test2 --type=TXT
If you run those commands and inspect the resulting transaction.yaml, you can see whether it contains the right string. If it worked correctly, you should see something like:
- kind: dns#resourceRecordSet
  name: test.
  rrdatas:
  - '"ASDFASDF"'
  ttl: 30
  type: TXT
I am executing this via Node's child_process, but I have the issue even if I execute it directly from bash, so Node isn't really a meaningful factor at the moment. I've tried echoing the value in. I've tried setting an environment variable and using that in the string.
No matter what I do I get an error like the following:
ERROR: (gcloud.dns.record-sets.transaction.add) unrecognized arguments: -E_ASDFSDF
It turns out some characters need to be escaped in the CLI. I can confirm that the following works:
gcloud dns --project=myprojectid record-sets transaction add "\-test123" --name=test.mydomain.com. --ttl=300 --type=TXT --zone=myzoneid
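As an unverified aside: gcloud's command-line parsing is built on Python's argparse, which treats -- as an end-of-options marker, so putting the positional value after -- may also stop it from being read as a flag. This is an assumption, not something confirmed here:

gcloud dns record-sets transaction add --zone=myzone --name=test --ttl=30 --type=TXT -- "-E_ASDFSDF"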
I need to launch ad-hoc commands like "-m ping" on existing EC2 instances, but that requires a key pair.
How do I set the key pair for boto, like the "aws_access_key_id" stored in ~/.aws/credentials?
I also have a problem with the inventory:
I have an "inventory" folder next to Ansible, where both local hosts and the aws_ec2.yml file are stored. But ansible-inventory --list works only for the aws_ec2.yml file...
You can declare the appropriate credentials and other host-specific variables right in the inventory file, e.g.:
[ec2fleet]
35.... ansible_ssh_user=ec2-user
15.... ansible_ssh_user=ubuntu
[ec2fleet:vars]
ansible_user=deployer
ansible_ssh_private_key_file=/home/deployer/.ssh/deployer.pem
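For the boto/AWS credentials side, the aws_ec2 inventory plugin picks up ~/.aws/credentials through boto3's default credential chain, and a named profile can be selected in the plugin config. A minimal sketch of inventory/aws_ec2.yml (the region and profile names here are just placeholders):

plugin: amazon.aws.aws_ec2
# boto3 reads ~/.aws/credentials automatically; a named profile can be chosen explicitly
aws_profile: default
regions:
  - us-east-1
compose:
  # connect to the public address of each instance
  ansible_host: public_ip_address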
I am using Chef with Test Kitchen (1.5.0) and Vagrant (1.8.1) to manage a user consistently on a new server. My user recipe looks like this:
include_recipe "users"

group 'sudo'

password_secret = Chef::EncryptedDataBagItem.load_secret(node['enterprise_sp']['secret_file'])
jays_password = Chef::EncryptedDataBagItem.load('user_secrets', 'jgodse', password_secret)['password']
shadow_password = `openssl passwd -1 -salt xyz #{jays_password}`.strip

user 'jgodse' do
  action :create
  group 'sudo'
  system true
  shell '/bin/bash'
  home '/home/jgodse'
  manage_home true
  password shadow_password # added to /etc/shadow when Chef runs
end
The unencrypted data bag was where I configured my password in the clear. I then encrypted the data bag with a knife command.
This works, but it seems like a really dirty way around the problem of setting my password. I had to do it this way because the password property of the user resource only takes the shadow (hashed) password, and that can only be generated by shelling out to an openssl command.
Is there a cleaner way of getting the shadow password without shelling out to an openssl command which generates the password?
You should not be storing the plaintext password at all; just hash it beforehand and put the hash in the data bag in the first place. Also, using encrypted data bags like this is scary-level unsafe; please take some time to familiarize yourself with the threat model of Chef's encryption tools, because this ain't it.
At least pre-calculate the password hash and put that into the data bag.
See https://github.com/chef-cookbooks/users for inspiration.
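To make the suggestion concrete, a rough sketch (assuming the pre-computed hash is stored in the data bag item under a 'password_hash' key, which is a made-up name): generate the shadow-format hash once on a workstation, for example with

openssl passwd -1 -salt xyz 'the-plaintext-password'

store that output in the data bag item, and then the recipe no longer needs to shell out at converge time:

password_secret = Chef::EncryptedDataBagItem.load_secret(node['enterprise_sp']['secret_file'])
jays_hash = Chef::EncryptedDataBagItem.load('user_secrets', 'jgodse', password_secret)['password_hash']

user 'jgodse' do
  action :create
  group 'sudo'
  shell '/bin/bash'
  home '/home/jgodse'
  manage_home true
  password jays_hash # already a shadow-format hash, no openssl call needed
end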