Terraform local-exec command scp fails - amazon-ec2

I am trying to copy a directory to a new EC2 instance using Terraform:
provisioner "local-exec" {
  command = "scp -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ../ansible ubuntu@${self.public_ip}:~/playbook_dir"
}
But after the instance is created I get an error:
Error running command 'sleep 5; scp -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no -o
│ UserKnownHostsFile=/dev/null -r ../ansible ubuntu@54.93.82.73:~/playbook_dir': exit status 1. Output:
│ ssh: connect to host 54.93.82.73 port 22: Connection refused
│ lost connection
The strange thing is that if I copy the command to a terminal and replace the IP, it works. Why does that happen? Please help me figure it out.
I read in the documentation that the sshd service may not work correctly right after the instance is created, so I added a sleep 5 command before the scp, but it hasn't helped.

I tried the same in my local environment and, when using the local-exec provisioner in aws_instance directly, I also got the same error message; I am honestly not sure of the details of why.
However, as a workaround you can use a null_resource with the local-exec provisioner and the same command, including the sleep, and it works.
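A fixed sleep is inherently racy, since sshd can take longer than expected to come up. An alternative sketch is to poll port 22 until the connection succeeds before running scp. The `wait_for_port` helper below is hypothetical (host, port, and retry count are placeholders to adapt), and the `/dev/tcp` pseudo-device requires bash:

```shell
# Sketch: poll until sshd accepts TCP connections instead of sleeping a fixed
# time. wait_for_port and its defaults are hypothetical; /dev/tcp needs bash.
wait_for_port() {
  host=$1; port=$2; tries=${3:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    # the subshell exits 0 only if the TCP connect succeeds
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# e.g. in the provisioner command, instead of "sleep 5; scp ...":
# wait_for_port "$HOST" 22 30 && scp ...
```

This keeps the copy from starting before sshd is actually listening, and gives up cleanly after the retry budget instead of hanging.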
Terraform code
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_key_pair" "stackoverflow" {
key_name = "stackoverflow-key"
public_key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKml4tkIVsa1JSZ0OSqSBnF+0rTMWC5y7it4y4F/cMz6"
}
resource "aws_instance" "stackoverflow" {
ami = data.aws_ami.ubuntu.id
instance_type = "t2.micro"
subnet_id = var.subnet_id
vpc_security_group_ids = var.vpc_security_group_ids ## Must allow SSH inbound
key_name = aws_key_pair.stackoverflow.key_name
tags = {
Name = "stackoverflow"
}
}
resource "aws_eip" "stackoverflow" {
instance = aws_instance.stackoverflow.id
vpc = true
}
output "public_ip" {
value = aws_eip.stackoverflow.public_ip
}
resource "null_resource" "scp" {
provisioner "local-exec" {
command = "sleep 10 ;scp -i ~/.ssh/aws-stackoverflow -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ~/test/sub-test-dir ubuntu#${aws_eip.stackoverflow.public_ip}:~/playbook_dir"
}
}
Code In Action
aws_key_pair.stackoverflow: Creating...
aws_key_pair.stackoverflow: Creation complete after 0s [id=stackoverflow-key]
aws_instance.stackoverflow: Creating...
aws_instance.stackoverflow: Still creating... [10s elapsed]
aws_instance.stackoverflow: Still creating... [20s elapsed]
aws_instance.stackoverflow: Still creating... [30s elapsed]
aws_instance.stackoverflow: Still creating... [40s elapsed]
aws_instance.stackoverflow: Creation complete after 42s [id=i-006c17b995b9b7bd6]
aws_eip.stackoverflow: Creating...
aws_eip.stackoverflow: Creation complete after 1s [id=eipalloc-0019932a06ccbb425]
null_resource.scp: Creating...
null_resource.scp: Provisioning with 'local-exec'...
null_resource.scp (local-exec): Executing: ["/bin/sh" "-c" "sleep 10 ;scp -i ~/.ssh/aws-stackoverflow -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ~/test/sub-test-dir ubuntu@3.76.153.108:~/playbook_dir"]
null_resource.scp: Still creating... [10s elapsed]
null_resource.scp (local-exec): Warning: Permanently added '3.76.153.108' (ED25519) to the list of known hosts.
null_resource.scp: Creation complete after 13s [id=3541365434265352801]
Verification Process
Local directory and files
$ ls ~/test/sub-test-dir
some_test_file
$ cat ~/test/sub-test-dir/some_test_file
local exec is not nice !!
Files and directory on Created instance
$ ssh -i ~/.ssh/aws-stackoverflow ubuntu@$(terraform output -raw public_ip)
The authenticity of host '3.76.153.108 (3.76.153.108)' can't be established.
ED25519 key fingerprint is SHA256:8dgDXB/wjePQ+HkRC61hTNnwaSBQetcQ/10E5HLZSwc.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '3.76.153.108' (ED25519) to the list of known hosts.
Welcome to Ubuntu 20.04.5 LTS (GNU/Linux 5.15.0-1028-aws x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Sat Feb 11 00:25:13 UTC 2023
System load: 0.0 Processes: 98
Usage of /: 20.8% of 7.57GB Users logged in: 0
Memory usage: 24% IPv4 address for eth0: 172.31.6.219
Swap usage: 0%
* Ubuntu Pro delivers the most comprehensive open source security and
compliance features.
https://ubuntu.com/aws/pro
* Introducing Expanded Security Maintenance for Applications.
Receive updates to over 25,000 software packages with your
Ubuntu Pro subscription. Free for personal use.
https://ubuntu.com/aws/pro
Expanded Security Maintenance for Applications is not enabled.
0 updates can be applied immediately.
Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@ip-172-31-6-219:~$ cat ~/playbook_dir/some_test_file
local exec is not nice !!

Related

Ansible & AWS SSM connectivity/plugin & “ciphertext refers to a customer master key that does not exist”

Anyone able to get ansible's: ansible_connection: aws_ssm working?
AFAICT this should be a drop in replacement for ssh:
https://docs.ansible.com/ansible/latest/collections/community/aws/aws_ssm_connection.html
My playbook runs with ssh, but not ssm:
---
- name: Test command
  gather_facts: false
  hosts: all
  vars:
    ansible_connection: ssh
    # ansible_connection: aws_ssm <--- this one no worky
    ansible_aws_ssm_region: eu-central-1
  tasks:
    - name: test
      command:
        cmd: ls -l
Running using:
ansible-playbook -i inventory_aws_ec2.yml --limit nghc-sbox2-bastion test.yml -vvvv
I’m missing something on the ansible SSM config. The error is: (from /var/log/amazon/ssm/amazon-ssm-agent.log)
2021-08-10 23:48:51 INFO [ssm-session-worker]
[bruce.edge@xxx.com-04d88576fd5ec3ae7] [DataBackend]
[pluginName=Standard_Stream] Initiating Handshake 2021-08-10 23:48:54
ERROR [ssm-session-worker] [bruce.edge@xxx.com-04d88576fd5ec3ae7]
[DataBackend] [pluginName=Standard_Stream] Fetching data key failed:
Unable to retrieve data key, Error when decrypting data key
AccessDeniedException: The ciphertext refers to a customer master key
that does not exist, does not exist in this region, or you are not
allowed to access.
The ansible output is no more helpful:
<i-0c208bc6d31fa6bf1> EXEC stdout line:
<i-0c208bc6d31fa6bf1> EXEC stdout line: Starting session with SessionId: bruce.edge@xxx.com-0f7b6c9323afa74bc
<i-0c208bc6d31fa6bf1> EXEC remaining: 60
<i-0c208bc6d31fa6bf1> EXEC remaining: 59
<i-0c208bc6d31fa6bf1> EXEC stdout line:
<i-0c208bc6d31fa6bf1> EXEC stdout line:
<i-0c208bc6d31fa6bf1> EXEC stdout line: SessionId: bruce.edge@xxx.com-0f7b6c9323afa74bc :
<i-0c208bc6d31fa6bf1> EXEC stdout line: ----------ERROR-------
<i-0c208bc6d31fa6bf1> EXEC stdout line: Encountered error while initiating handshake. Fetching data key failed: Unable to retrieve data key, Error when decrypting data key AccessDeniedException: The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access.
<i-0c208bc6d31fa6bf1> EXEC stdout line: status code: 400, request id: 53549e47-03a1-4a1f-8f30-8f0c27482cc5
<i-0c208bc6d31fa6bf1> EXEC stdout line:
<i-0c208bc6d31fa6bf1> EXEC stdout line:
<i-0c208bc6d31fa6bf1> ssm_retry: attempt: 0, caught exception(local variable 'returncode' referenced before assignment) from cmd (echo ~...), pausing for 0 seconds
<i-0c208bc6d31fa6bf1> CLOSING SSM CONNECTION TO: i-0c208bc6d31fa6bf1
<i-0c208bc6d31fa6bf1> TERMINATE SSM SESSION: bruce.edge@xxx.com-0f7b6c9323afa74bc
<i-0c208bc6d31fa6bf1> ESTABLISH SSM CONNECTION TO: i-0c208bc6d31fa6bf1
<i-0c208bc6d31fa6bf1> SSM COMMAND: ['/usr/local/bin/session-manager-plugin', '{"SessionId": "bruce.edge@xxx.com-0d95f1030d63fa155", "TokenValue": "......Gsoj8bEu3d9s=", "StreamUrl": "wss://ssmmessages.eu-central-1.amazonaws.com/v1/data-channel/bruce.edge@xxx.com-0d95f1030d63fa155?role=publish_subscribe", "ResponseMetadata": {"RequestId": "8d20fbe9-d3d2-44e7-a832-a1d4d86861a9", "HTTPStatusCode": 200, "HTTPHeaders": {"server": "Server", "date": "Wed, 11 Aug 2021 00:43:13 GMT", "content-type": "application/x-amz-json-1.1", "content-length": "651", "connection": "keep-alive", "x-amzn-requestid": "8d20fbe9-d3d2-44e7-a832-a1d4d86861a9"}, "RetryAttempts": 0}}', 'eu-central-1', 'StartSession', '', '{"Target": "i-0c208bc6d31fa6bf1"}', 'https://ssm.eu-central-1.amazonaws.com']
<i-0c208bc6d31fa6bf1> SSM CONNECTION ID: bruce.edge@xxx.com-0d95f1030d63fa155
<i-0c208bc6d31fa6bf1> EXEC echo ~
<i-0c208bc6d31fa6bf1> _wrap_command: 'echo QTPJHrIizAXitS...
My SSM is set up correctly for other functionality.
I'm able to SSH over SSM and run remote playbooks via SSM, just not use the:
ansible_connection: aws_ssm
connection mechanism.
Don't disable KMS encryption as some SSM services won't work.
The right solution is to go to Key Management Service (KMS), select Customer managed keys and select the key you are using.
There you can add the role that your EC2 instances are using as users to that key.
Disabling KMS encryption in the SSM config fixes this issue:
(AWS console -> system manager -> session manager -> preferences tab)
Addendum re: KMS
This may be justified, and the lack of appropriate error messages is just super-misleading; see the AWS docs on this:
https://aws.amazon.com/premiumsupport/knowledge-center/ssm-session-manager-failures/ i.e. it should work with KMS enabled, it just requires making the key accessible to both the Ansible user and the target(s), which is obvious when you think about it. It's just super-non-obvious from the error. Have not tested.
And... you need to reconfigure dash NOT to be the default:
sudo dpkg-reconfigure dash
or, for ansible fans:
# See "/var/cache/debconf/config.dat" for name of config item after changing manually
- name: aws-ssm ansible plugin fails if dash is the default shell
  ansible.builtin.debconf:
    name: dash
    question: dash/sh
    value: false
    vtype: boolean
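Before reconfiguring, you can check what /bin/sh currently resolves to; this sketch assumes a Debian/Ubuntu-style setup where /bin/sh is a symlink:

```shell
# Report what /bin/sh resolves to (on Debian/Ubuntu it is often dash)
target=$(readlink -f /bin/sh)
case "$target" in
  */dash) echo "sh is dash; run: sudo dpkg-reconfigure dash" ;;
  *)      echo "sh is $target" ;;
esac
```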

Why is userdata not working in my Terraform code?

I am working with Terraform and trying to execute a bash script using user data. Below is my code:
resource "aws_instance" "web_server" {
  ami                         = var.centos
  instance_type               = var.instance-type
  subnet_id                   = aws_subnet.private.id
  private_ip                  = var.web-private-ip
  associate_public_ip_address = true

  user_data = <<-EOF
    #!/bin/bash
    yum install httpd -y
    echo "hello world" > /var/www/html/index.html
    yum update -y
    systemctl start httpd
    firewall-cmd --zone=public --permanent --add-service=http
    firewall-cmd --zone=public --permanent --add-service=https
    firewall-cmd --reload
  EOF
}
However, when I navigate to the public IP I do not see the "hello world" message and also do not get a response from the server. Is there something I am missing here? I've tried going straight through the AWS console and user data is unsuccessful there too.
I verified your user data on my CentOS instance and your script is correct. However, the issue is probably because of two things:
subnet_id = aws_subnet.private.id suggests that you've placed your instance in a private subnet. To connect to your instance from the internet, it must be in a public subnet.
There is no vpc_security_group_ids specified, which leads to using the default SG of the VPC, which blocks inbound internet traffic by default.
Also, I'm not sure what you want to do with private_ip = var.web-private-ip. It's confusing.
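One cheap check before applying: validate the user_data script's shell syntax locally. The sketch below writes a stand-in copy of the script from the question to a temporary file (the /tmp path is arbitrary) and runs bash -n on it:

```shell
# Sketch: catch shell syntax errors in a user_data script before
# `terraform apply`. The body mirrors the question; the path is arbitrary.
cat > /tmp/user_data.sh <<'EOF'
#!/bin/bash
yum install httpd -y
echo "hello world" > /var/www/html/index.html
systemctl start httpd
EOF

bash -n /tmp/user_data.sh && echo "user_data syntax OK"
```

On the instance itself, /var/log/cloud-init-output.log shows whether the script actually ran, and `curl http://169.254.169.254/latest/user-data` from the instance shows what was delivered.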

SSH Connection without password is not working

I am new to Hadoop. I am trying to connect the namenode and datanode through SSH, but I am not able to SSH without a password even though I have set up the public key.
Below is the sshd config.
# Package generated configuration file
# See the sshd_config(5) manpage for details
# What ports, IPs and protocols we listen for
Port 22
# Use these options to restrict which interfaces/protocols sshd will bind to
#ListenAddress ::
#ListenAddress 0.0.0.0
Protocol 2
# HostKeys for protocol version 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
#Privilege Separation is turned on for security
UsePrivilegeSeparation yes
# Lifetime and size of ephemeral version 1 server key
KeyRegenerationInterval 3600
ServerKeyBits 1024
# Logging
SyslogFacility AUTH
LogLevel INFO
# Authentication:
LoginGraceTime 120
PermitRootLogin prohibit-password
StrictModes yes
RSAAuthentication yes
PubkeyAuthentication yes
#AuthorizedKeysFile %h/.ssh/authorized_keys
# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
#IgnoreUserKnownHosts yes
# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no
# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no
# Change to no to disable tunnelled clear text passwords
#PasswordAuthentication yes
# Kerberos options
#KerberosAuthentication no
#KerberosGetAFSToken no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
#UseLogin no
#MaxStartups 10:30:60
#Banner /etc/issue.net
# Allow client to pass locale environment variables
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM yes
Please let me know how to fix the issue. I have tried several solutions found on Google, but none of them are working. Please help.
Try following the steps below.
Change to root user
arif@ubuntu:~$ sudo -s
Recreate the SSH directory
root@ubuntu:~# cd ~
root@ubuntu:/# sudo rm -rf .ssh
root@ubuntu:/# ls -l .ssh
ls: cannot access .ssh: No such file or directory
root@ubuntu:/# mkdir .ssh
root@ubuntu:/# chmod 700 .ssh
Create authorized_key file
root@ubuntu:/# touch .ssh/authorized_keys
root@ubuntu:/# chmod 600 .ssh/authorized_keys
Generate a passwordless key
root@ubuntu:/# ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
4f:f1:83:ad:03:ed:82:46:fa:11:ec:74:10:bf:03:41 root@ubuntu
The key's randomart image is:
+--[ RSA 2048]----+
| .E |
| + |
| o . . |
| . o o = |
| = S + + |
| = + * . . |
| . = . = |
| o . . . |
| . |
+-----------------+
Copy that key to other servers
Also, copy to localhost
root@ubuntu:/# cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
root@ubuntu:/# chmod -R 750 /root/.ssh/authorized_keys
Test your key
root@ubuntu:/# ssh localhost
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
87:21:89:ac:cd:ce:bf:32:30:d6:d2:a2:dc:ff:6d:ad.
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /root/.ssh/known_hosts:1
remove with: ssh-keygen -f "/root/.ssh/known_hosts" -R localhost
ECDSA host key for localhost has changed and you have requested strict checking.
Host key verification failed.
Following the instructions mentioned in the above error
root@ubuntu:/# ssh-keygen -f "/root/.ssh/known_hosts" -R localhost
# Host localhost found: line 1 type ECDSA
/root/.ssh/known_hosts updated.
Original contents retained as /root/.ssh/known_hosts.old
Testing again
root@ubuntu:/# ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is 87:21:89:ac:cd:ce:bf:32:30:d6:d2:a2:dc:ff:6d:ad.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 4.4.0-31-generic x86_64)
* Documentation: https://help.ubuntu.com/
New release '16.04.3 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Wed Dec 20 07:13:15 2017 from localhost
Now, start Hadoop
root@ubuntu:/# cd $HADOOP_HOME
root@ubuntu:~/applications/hadoop/hadoop-2.9.0# sbin/start-all.sh
Now always log in as root with sudo -s before starting or stopping the Hadoop cluster (sbin/start-all.sh / sbin/stop-all.sh); otherwise you will have to answer "yes" and then provide the password five times.
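A frequent cause of passwordless SSH failing even with a correct key is file permissions that sshd's StrictModes setting rejects. This sketch enforces the expected modes; fix_ssh_perms is a hypothetical helper, demonstrated against a scratch directory rather than a real ~/.ssh:

```shell
# Sketch: enforce the permissions sshd StrictModes expects.
# fix_ssh_perms is hypothetical; the demo uses a scratch directory.
fix_ssh_perms() {
  d=$1
  chmod 700 "$d"
  [ -f "$d/authorized_keys" ] && chmod 600 "$d/authorized_keys"
}

mkdir -p /tmp/demo_ssh
touch /tmp/demo_ssh/authorized_keys
fix_ssh_perms /tmp/demo_ssh
```

Run it against the actual .ssh directory of the user you SSH in as (e.g. fix_ssh_perms /root/.ssh) on every node.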

ansible:Failed to connect to the host via ssh: Warning: Permanently added '10.90.0.2' (ECDSA) to the list of known hosts.\r\nPermission denied

I want to use a key to log in to some hosts, but an error happened.
My files are shown below:
[jenkins@ci-jenkins-slave-dev test]$ ls
ansible.cfg hosts test.yml
my hosts file:
[jenkins@ci-jenkins-slave-dev test]$ cat hosts
[controller]
10.90.0.2 ssh_key_pass=passw0rd ansible_ssh_user=root
my playbook:
[jenkins@ci-jenkins-slave-dev test]$ cat test.yml
---
- name: test
  hosts: controller
  tasks:
    - name: add key
      authorized_key:
        user: root
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
when run playbook :
[jenkins@ci-jenkins-slave-dev test]$ ansible-playbook test.yml
PLAY [test] ******************************************************************************************************************************************************************************************************************************************************************
TASK [add key] ***************************************************************************************************************************************************************************************************************************************************************
fatal: [10.90.0.2]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '10.90.0.2' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey,password).\r\n", "unreachable": true}
to retry, use: --limit @/home/jenkins/ansible-test/test/test.retry
PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************************
10.90.0.2 : ok=0 changed=0 unreachable=1 failed=0
I can use "ssh root@10.90.0.2" and enter "passw0rd" to log in, but Ansible can't. I want to know what's wrong.
my ansible.cfg :
[jenkins@ci-jenkins-slave-dev test]$ cat ansible.cfg
# config file for ansible -- http://ansible.com/
# ==============================================
# nearly all parameters can be overridden in ansible-playbook
# or with command line flags. ansible will read ANSIBLE_CONFIG,
# ansible.cfg in the current working directory, .ansible.cfg in
# the home directory or /etc/ansible/ansible.cfg, whichever it
# finds first
[defaults]
# some basic default values...
hostfile = ./hosts
library = /usr/share/ansible
remote_tmp = $HOME/.ansible/tmp
pattern = *
forks = 5
poll_interval = 15
sudo_user = root
#ask_sudo_pass = True
#ask_pass = True
transport = smart
remote_port = 22
module_lang = C
# plays will gather facts by default, which contain information about
# the remote system.
#
# smart - gather by default, but don't regather if already gathered
# implicit - gather by default, turn off with gather_facts: False
# explicit - do not gather by default, must say gather_facts: True
gathering = explicit
# additional paths to search for roles in, colon separated
#roles_path = /etc/ansible/roles
# uncomment this to disable SSH key host checking
host_key_checking = False
# change this for alternative sudo implementations
sudo_exe = sudo
# what flags to pass to sudo
#sudo_flags = -H
# SSH timeout
timeout = 10
# default user to use for playbooks if user is not specified
# (/usr/bin/ansible will use current user as default)
remote_user = root
# logging is off by default unless this path is defined
# if so defined, consider logrotate
#log_path = /var/log/ansible.log
# default module name for /usr/bin/ansible
#module_name = command
# use this shell for commands executed under sudo
# you may need to change this to bin/bash in rare instances
# if sudo is constrained
#executable = /bin/sh
# if inventory variables overlap, does the higher precedence one win
# or are hash values merged together? The default is 'replace' but
# this can also be set to 'merge'.
#hash_behaviour = replace
# list any Jinja2 extensions to enable here:
#jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n
# if set, always use this private key file for authentication, same as
# if passing --private-key to ansible or ansible-playbook
private_key_file = ~/.ssh/id_rsa
# format of string {{ ansible_managed }} available within Jinja2
# templates indicates to users editing templates files will be replaced.
# replacing {file}, {host} and {uid} and strftime codes with proper values.
ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
# by default, ansible-playbook will display "Skipping [host]" if it determines a task
# should not be run on a host. Set this to "False" if you don't want to see these "Skipping"
# messages. NOTE: the task header will still be shown regardless of whether or not the
# task is skipped.
#display_skipped_hosts = True
# by default (as of 1.3), Ansible will raise errors when attempting to dereference
# Jinja2 variables that are not set in templates or action lines. Uncomment this line
# to revert the behavior to pre-1.3.
#error_on_undefined_vars = False
# by default (as of 1.6), Ansible may display warnings based on the configuration of the
# system running ansible itself. This may include warnings about 3rd party packages or
# other conditions that should be resolved if possible.
# to disable these warnings, set the following value to False:
#system_warnings = True
# by default (as of 1.4), Ansible may display deprecation warnings for language
# features that should no longer be used and will be removed in future versions.
# to disable these warnings, set the following value to False:
#deprecation_warnings = True
# set plugin path directories here, separate with colons
action_plugins = /usr/share/ansible_plugins/action_plugins
callback_plugins = /usr/share/ansible_plugins/callback_plugins
connection_plugins = /usr/share/ansible_plugins/connection_plugins
lookup_plugins = /usr/share/ansible_plugins/lookup_plugins
vars_plugins = /usr/share/ansible_plugins/vars_plugins
filter_plugins = /usr/share/ansible_plugins/filter_plugins
# don't like cows? that's unfortunate.
# set to 1 if you don't want cowsay support or export ANSIBLE_NOCOWS=1
#nocows = 1
# don't like colors either?
# set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1
#nocolor = 1
# the CA certificate path used for validating SSL certs. This path
# should exist on the controlling node, not the target nodes
# common locations:
# RHEL/CentOS: /etc/pki/tls/certs/ca-bundle.crt
# Fedora : /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
# Ubuntu : /usr/share/ca-certificates/cacert.org/cacert.org.crt
#ca_file_path =
# the http user-agent string to use when fetching urls. Some web server
# operators block the default urllib user agent as it is frequently used
# by malicious attacks/scripts, so we set it to something unique to
# avoid issues.
#http_user_agent = ansible-agent
[paramiko_connection]
# uncomment this line to cause the paramiko connection plugin to not record new host
# keys encountered. Increases performance on new host additions. Setting works independently of the
# host key checking setting above.
record_host_keys=False
# by default, Ansible requests a pseudo-terminal for commands executed under sudo. Uncomment this
# line to disable this behaviour.
#pty=False
[ssh_connection]
# ssh arguments to use
# Leaving off ControlPersist will result in poor performance, so use
# paramiko on older platforms rather than removing it
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null
# The path to use for the ControlPath sockets. This defaults to
# "%(directory)s/ansible-ssh-%%h-%%p-%%r", however on some systems with
# very long hostnames or very long path names (caused by long user names or
# deeply nested home directories) this can exceed the character limit on
# file socket names (108 characters for most platforms). In that case, you
# may wish to shorten the string below.
#
# Example:
# control_path = %(directory)s/%%h-%%r
#control_path = %(directory)s/ansible-ssh-%%h-%%p-%%r
# Enabling pipelining reduces the number of SSH operations required to
# execute a module on the remote server. This can result in a significant
# performance improvement when enabled, however when using "sudo:" you must
# first disable 'requiretty' in /etc/sudoers
#
# By default, this option is disabled to preserve compatibility with
# sudoers configurations that have requiretty (the default on many distros).
#
#pipelining = False
# if True, make ansible use scp if the connection type is ssh
# (default is sftp)
#scp_if_ssh = True
[accelerate]
accelerate_port = 5099
accelerate_timeout = 30
accelerate_connect_timeout = 5.0
# The daemon timeout is measured in minutes. This time is measured
# from the last activity to the accelerate daemon.
accelerate_daemon_timeout = 30
# If set to yes, accelerate_multi_key will allow multiple
# private keys to be uploaded to it, though each user must
# have access to the system via SSH to add a new key. The default
# is "no".
#accelerate_multi_key = yes
If you need any additional information, please let me know, and I will add
I also faced the same problem. I had missed a step:
ssh-copy-id localhost
Then you can successfully run:
ansible-playbook -i hosts simple-docker-project.yml --check
The variable for the password is ansible_ssh_pass, but you used ssh_key_pass.
Try with this inventory:
[controller]
10.90.0.2 ansible_ssh_pass=passw0rd ansible_ssh_user=root
Note that password-based SSH in Ansible also requires the sshpass program to be installed on the control node.

knife vsphere requests root password - is unattended execution possible?

Is there any way to run knife vsphere for unattended execution? I have a deploy shell script which I am using to help me:
cat deploy-production-20-vm.sh
#!/bin/bash
##############################################
# These are machine dependent variables (need to change)
##############################################
HOST_NAME=$1
IP_ADDRESS="$2/24"
CHEF_BOOTSTRAP_IP_ADDRESS="$2"
RUNLIST=\"$3\"
CHEF_HOST=$HOST_NAME.my.lan
##############################################
# These are pseudo-environment independent variables (could change)
##############################################
DATASTORE="dcesxds04"
##############################################
# These are environment dependent variables (should not change per env)
##############################################
TEMPLATE="\"CentOS\""
NETWORK="\"VM Network\""
CLUSTER="ProdCluster01" #knife-vsphere calls this a resource pool
GATEWAY="10.7.20.1"
DNS="\"10.7.20.11,10.8.20.11,10.6.20.11\""
##############################################
# the magic
##############################################
VM_CLONE_CMD="knife vsphere vm clone $HOST_NAME \
--template $TEMPLATE \
--cips $IP_ADDRESS \
--vsdc MarkleyDC\
--datastore $DATASTORE \
--cvlan $NETWORK\
--resource-pool $CLUSTER \
--cgw $GATEWAY \
--cdnsips $DNS \
--start true \
--bootstrap true \
--fqdn $CHEF_BOOTSTRAP_IP_ADDRESS \
--chost $HOST_NAME\
--cdomain my.lan \
--run-list=$RUNLIST"
echo $VM_CLONE_CMD
eval $VM_CLONE_CMD
Which echos (as a single line):
knife vsphere vm clone dcbsmtest --template "CentOS" --cips 10.7.20.84/24
--vsdc MarkleyDC --datastore dcesxds04 --cvlan "VM Network"
--resource-pool ProdCluster01 --cgw 10.7.20.1
--cdnsips "10.7.20.11,10.8.20.11,10.6.20.11" --start true
--bootstrap true --fqdn 10.7.20.84 --chost dcbsmtest --cdomain my.lan
--run-list="role[my-env-prod-server]"
When it runs it outputs:
Cloning template CentOS Template to new VM dcbsmtest
Finished creating virtual machine dcbsmtest
Powered on virtual machine dcbsmtest
Waiting for sshd...done
Doing old-style registration with the validation key at /home/me/chef-repo/.chef/our-validator.pem...
Delete your validation key in order to use your user credentials instead
Connecting to 10.7.20.84
root@10.7.20.84's password:
If I step away from my desk and it prompts for the password, then sometimes it times out, the connection is lost, and Chef doesn't bootstrap. Also, I would like to be able to automate all of this to be elastic based on system needs, which won't work with attended execution.
The idea I am going to run with, unless provided a better solution, is to have a default password in the template and pass it on the command line to knife, then have Chef change the password once the build is complete, minimizing the exposure of a hard-coded password in the bash script controlling knife...
Update: I wanted to add that this is working like a charm. Ideally we could have changed the CentOS template we were deploying, but it wasn't possible here, so this is a fine alternative (as we changed the root password after deploy anyhow).
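As an aside, the escaped-quote-plus-eval pattern in the deploy script is fragile. Building the command as a bash array preserves arguments containing spaces (like "VM Network") without eval; in this sketch, echo stands in for knife and the values are placeholders:

```shell
#!/bin/bash
# Sketch: build argv as a bash array so values with spaces ("VM Network")
# survive without eval or escaped quotes. echo stands in for knife here.
TEMPLATE="CentOS"
NETWORK="VM Network"
CMD=(echo vsphere vm clone dcbsmtest --template "$TEMPLATE" --cvlan "$NETWORK")
"${CMD[@]}"
```

With the array form, "$NETWORK" stays a single argument to the command, so none of the nested \" escaping in the original script is needed.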
