Please see the attachments below. I am working on integrating Ansible with OpenStack. While running the playbook I got the connection failed error shown below.
1) I am using the nova_compute module in Ansible. While running the playbook it said the python-novaclient module is required, so I installed python-novaclient using pip.
But when I ran the playbook again, it gave the same python-novaclient required error.
2) After installing python-novaclient, the nova command works fine.
I checked with the command below:
Root>nova --os-username admin --os-password MiracleIT --os-project-name admin --os-auth-url http://192.168.8.25:5000/v3/ service-list
It gave the error attached below.
Attachment: 2
3) I am using the os_server module in Ansible. While running the playbook, it gave a RegionOne (Inner Exception: problem with authorization parameters) error. I have attached the screenshot.
Attachment: 3
I found the answer: I had not entered the controller IP into the /etc/hosts file.
After putting the IP into /etc/hosts, my playbook creates an instance in OpenStack and works fine.
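For anyone who hits the same RegionOne / authorization error, the task boils down to something like the sketch below; the names, credentials and auth URL are placeholders, and the key point is that the hostname in auth_url must be resolvable from the Ansible machine (hence the /etc/hosts entry):
- name: Launch an instance
  os_server:
    state: present
    name: test-instance    # placeholder instance name
    image: cirros          # placeholder image
    flavor: m1.small       # placeholder flavor
    network: private       # placeholder network
    auth:
      auth_url: http://controller:5000/v3/   # this hostname must resolve (see /etc/hosts)
      username: admin
      password: <PASSWORD>
      project_name: admin
      user_domain_name: Default
      project_domain_name: Default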
Thank you.
Related
Currently using Ansible Controller (Ansible Automation Platform v4.2). I can import virtual machines from our VMware environment with no issue. However, when I add "with_tags: true" to my inventory source I get the following error when running the inventory sync:
[WARNING]: * Failed to parse /runner/inventory/vmware_vm_inventory.yml with
auto plugin: Unable to find 'vSphere Automation SDK' Python library which is
required. Please refer this URL for installation steps -
https://code.vmware.com/web/sdk/7.0/vsphere-automation-python
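For reference, the inventory source is roughly the following shape; the hostname and credentials are placeholders, and with_tags is the only line that was added:
plugin: community.vmware.vmware_vm_inventory
strict: false
hostname: <VCENTER_HOSTNAME>
username: <VCENTER_USERNAME>
password: <VCENTER_PASSWORD>
validate_certs: false
with_tags: true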
I have checked what documentation I can find and ran the following to install the prerequisites:
pip install -r ~/.ansible/collections/ansible_collections/community/vmware/requirements.txt
This appears to install everything fine including vSphere Automation SDK.
Any ideas? Has anyone successfully set up their Ansible Controller to import VMware tags?
Thanks
View VMware tags when running inventory sync.
Problem
I'm trying to connect to a few of my Linux EC2 instances and I'm getting some odd behavior depending on how I connect to them.
Terminal
If I try to connect to it from the terminal using the following command:
ssh -i "<PATH_TO_PRIVATE_KEY_FILE>" ec2-user@<PRIVATE_IP_ADDRESS>
I'm able to connect successfully.
Visual Studio Code Remote Explorer
I am able to connect to the instance successfully.
Paramiko
import paramiko

# Create a new connection.
ssh_conn = paramiko.SSHClient()
ssh_conn.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# Load the Private Key.
my_rsa_key = paramiko.RSAKey.from_private_key_file(key_file)
# Connect to the server (SSHClient.connect returns None; the client object itself holds the session).
ssh_conn.connect(
    hostname=host,
    port=port,
    username=username,
    pkey=my_rsa_key,
    timeout=5,
)
Here I get a timeout error. I'm confident the code is fine because it works with some instances but not others, and the issue always seems to be the connection portion.
Ansible
When I try to connect to the same EC2 instance using Ansible I get Permission denied (publickey). Now I can confidently say it's not a syntax error inside of the Ansible code because when I run the same code on a few different EC2 instances it runs fine without problem. The issue is only related to the connection process.
Thoughts?
The behavior is limited to a few instances and it's always the same issue. What would cause behavior like this, or how could I go about diagnosing the problem? I'm happy to add more detail, but I wanted to start here and see what people thought.
This error (Permission denied (publickey)) basically means that SSH key authentication failed; with Ansible the usual cause is that it doesn't know which remote user it should connect as, so it falls back to your local username instead of ec2-user. That's why it works when you call the ssh command explicitly (where you pass the user) but not through Ansible or other tools.
Regarding Ansible, add this to your inventory file:
your_host_group:
  vars:
    ansible_user: ec2-user
  hosts:
    your_hostname:
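If the key isn't the one ssh picks up by default, the private key path can be set the same way; a small sketch, with the path as a placeholder:
your_host_group:
  vars:
    ansible_user: ec2-user
    ansible_ssh_private_key_file: <PATH_TO_PRIVATE_KEY_FILE>
  hosts:
    your_hostname: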
I have the following set-up:
A script updatecreds.py runs, which updates AWS credentials in my Ansible creds file using STS.
Now, I use those creds to run AWS-related tasks in Ansible, and they run smoothly. But the CLI commands give me an error.
When I use the same credentials in the ~/.aws/config file, I get the following error when executing CLI commands: A client error (InvalidClientTokenId) occurred when calling the ListAccessKeys operation: The security token included in the request is invalid.
As some of my Ansible tasks run shell commands which are AWS CLI commands, this behaviour is messing with my Ansible run too.
Why is AWS behaving so weirdly? Or did I do something wrong here?
PS: My ~/.aws/config looks like this:
[default]
aws_access_key_id=<>
aws_secret_access_key=<>
aws_session_token=<>
region=us-east-1
There is some confusion between the session/security token terms; see this issue.
To make both boto and the AWS CLI work correctly, duplicate the token under both names:
[default]
aws_access_key_id=KEY
aws_secret_access_key=SECRET
aws_session_token=TOKEN
aws_security_token=TOKEN
region=REGION
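Another way to exercise the same temporary credentials from an Ansible shell task is to pass them as environment variables, which the AWS CLI also honours. A minimal sketch, assuming the STS values are already available as Ansible variables (the variable names here are placeholders):
- name: Run an AWS CLI command with the STS credentials
  shell: aws iam list-access-keys
  environment:
    AWS_ACCESS_KEY_ID: "{{ sts_access_key }}"
    AWS_SECRET_ACCESS_KEY: "{{ sts_secret_key }}"
    AWS_SESSION_TOKEN: "{{ sts_session_token }}"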
Try without
aws_session_token=<>
Mine works fine with only aws_access_key_id and aws_secret_access_key in ~/.aws/credentials
I have created two Ubuntu machines on VirtualBox. I am able to ping each machine from the terminal of the other.
However, when I ping from Ansible I get the following error.
My /etc/ansible/hosts file is:
Can I get the solution for this?
If you read the documentation you will notice:
This is NOT ICMP ping
So the way the ping command works and the way the Ansible ping module works are different.
Reading further, Ansible ping module is described as:
Try to connect to host, verify a usable Python and return pong on success.
So Ansible tries to connect (and the default connection method is SSH) and execute Python code.
In your case Ansible failed to connect.
SSH connectivity is a prerequisite, so you need to configure that before you'll be able to use Ansible. For Ubuntu 16.04 you might need to additionally install the OpenSSH server.
Refer to the official guide for the installation and configuration steps.
On top of that, Ubuntu Server 16.04 does not install Python 2 by default, so you need to manually add it (Ansible support for Python 3 is still experimental).
Refer to answers under this question on AskUbuntu.
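One common way to bootstrap Python from Ansible itself is the raw module, which does not need Python on the target; a sketch, assuming SSH access already works and the connecting user can sudo:
- hosts: all
  gather_facts: no
  become: yes
  tasks:
    - name: Install Python 2 so regular modules can run
      raw: test -e /usr/bin/python || (apt-get update && apt-get install -y python-minimal)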
Then you still might need to set a parameter in the inventory file to tell Ansible to use Python 2. Or make Python 2 the default interpreter.
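The parameter is ansible_python_interpreter, which can be set per host or per group; a sketch in YAML inventory form, with the hostname as a placeholder:
all:
  hosts:
    ubuntu-vm:    # placeholder hostname
      ansible_python_interpreter: /usr/bin/python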
I'm using Vagrant and Ansible to create my Bitbucket Server on Ubuntu 15.10. I have the server setup complete and working, but I have to manually run the start-webapp.sh script to start Bitbucket each time I reprovision the server.
I have the following task in my Bitbucket role in Ansible. When I increase the verbosity I can see that I get a positive response from the server saying it will be running at http://localhost/, but when I go to the URL the server isn't up. If I then SSH into the server and run the script myself, getting the exact same response, I can see the startup web page.
- name: Start the Bitbucket Server
  become: yes
  shell: /bitbucket-server/atlassian-bitbucket-4.7.1/bin/start-webapp.sh
Any advice would be great on how to fix this.
Thanks,
Sam
Probably better to change that to an init script and use the service module to start it. For example, see this role for installing bitbucket...
Otherwise, you're subject to HUP and other issues from running processes under an ephemeral session.
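A sketch of what that would look like, assuming an init script has already been installed under a hypothetical service name bitbucket:
- name: Ensure the Bitbucket Server service is running
  become: yes
  service:
    name: bitbucket
    state: started
    enabled: yes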