Understanding EC2 credentials when provisioning servers

I'm using Ansible to provision EC2 servers. Here's what I've got so far:
- name: Launch instances
  local_action:
    module: ec2
    key_name: my-key
    aws_access_key: ***
    aws_secret_key: ***
    region: us-west-1
    group: management
    instance_type: m1.small
    image: ami-8635a9b6
    count: 2
    wait: yes
  register: ec2
But authentication is failing:
You are not authorized to perform this operation.
I imagine it's because I don't fully comprehend how the credentials work. I can see in the EC2 console that my-key is the key name for the instance I'm running on (the Ansible server), and I know the access_key and secret_key are correct.
I think this is more about my not understanding the key_name/keypair, how it works, and how to install it, rather than anything related directly to Ansible.
Or perhaps this has more to do with the user. I'm running the script as root.
Here is the log:
TASK: [Launch instances] ******************************************************
<127.0.0.1> REMOTE_MODULE ec2 image=ami-8635a9b6 ec2_secret_key=*** ec2_access_key=*** instance_type=m1.small region=us-west-1 key_name=ca-management group=management
<127.0.0.1> EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1417702041.0-138277713680589 && echo $HOME/.ansible/tmp/ansible-tmp-1417702041.0-138277713680589']
<127.0.0.1> PUT /tmp/tmpFgUh1O TO /root/.ansible/tmp/ansible-tmp-1417702041.0-138277713680589/ec2
<127.0.0.1> EXEC ['/bin/sh', '-c', u'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1417702041.0-138277713680589/ec2; rm -rf /root/.ansible/tmp/ansible-tmp-1417702041.0-138277713680589/ >/dev/null 2>&1']
failed: [127.0.0.1 -> 127.0.0.1] => {"failed": true, "parsed": false}
Traceback (most recent call last):
  File "/root/.ansible/tmp/ansible-tmp-1417702041.0-138277713680589/ec2", line 2959, in <module>
    main()
  File "/root/.ansible/tmp/ansible-tmp-1417702041.0-138277713680589/ec2", line 1191, in main
    (instance_dict_array, new_instance_ids, changed) = create_instances(module, ec2)
  File "/root/.ansible/tmp/ansible-tmp-1417702041.0-138277713680589/ec2", line 761, in create_instances
    grp_details = ec2.get_all_security_groups()
  File "/usr/lib/python2.6/site-packages/boto/ec2/connection.py", line 2969, in get_all_security_groups
    [('item', SecurityGroup)], verb='POST')
  File "/usr/lib/python2.6/site-packages/boto/connection.py", line 1182, in get_list
    raise self.ResponseError(response.status, response.reason, body)
boto.exception.EC2ResponseError: EC2ResponseError: 403 Forbidden
<?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>UnauthorizedOperation</Code><Message>You are not authorized to perform this operation.</Message></Error></Errors><RequestID>f3b9044b-9f41-44dd-9d5e-b7b13215c14a</RequestID></Response>
FATAL: all hosts have already failed -- aborting
Embarrassingly, it turned out IT gave me the wrong user. I switched to the correct user with permissions and voilà, it worked. I'm keeping the question for the useful answers below.

local_action:
  module: ec2
  ec2_access_key: ***
  ec2_secret_key: ***
This differs from what the documentation says. Here are the proper parameter names:
local_action:
  module: ec2
  aws_access_key: ***
  aws_secret_key: ***
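Alternatively, you can omit the keys from the play entirely and export them as environment variables, which boto picks up automatically; a minimal sketch (the playbook name here is just a placeholder):
export AWS_ACCESS_KEY=<your aws access key here>
export AWS_SECRET_KEY=<your aws secret key here>
ansible-playbook launch-instances.yml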

The error "You are not authorized to perform this operation." is a result of the access privileges assigned to you in AWS IAM. I am not sure about the Ansible part; however, check which permissions/policies are allowed or denied for your user in your AWS account.
Also, you can try launching an instance from the AWS console; you will likely receive a similar error there as well.
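For reference, a minimal IAM policy covering this play might look like the sketch below. It allows the ec2:DescribeSecurityGroups call that returned the 403 in the traceback, plus the obvious instance-launching actions; the exact action list your play needs may be longer.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeInstances",
        "ec2:RunInstances"
      ],
      "Resource": "*"
    }
  ]
}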

Related

How can I use Ansible to push playbooks with SSH key authentication?

I am new to Ansible and am trying to push playbooks to my nodes. I would like to push via SSH keys. Here is my playbook:
- name: nginx install and start services
  hosts: <ip>
  vars:
    ansible_ssh_private_key_file: "/path/to/.ssh/id_ed25519"
  become: true
  tasks:
    - name: install nginx
      yum:
        name: nginx
        state: latest
    - name: start service nginx
      service:
        name: nginx
        state: started
Here is my inventory:
<ip> ansible_ssh_private_key_file=/path/to/.ssh/id_ed25519
Before I push, I check whether it works: ansible-playbook -i /home/myuser/.ansible/hosts nginx.yaml --check
it gives me:
fatal: [ip]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: user#ip: Permission denied (publickey,password).", "unreachable": true}
On that server I don't have root privileges and can't use sudo; that's why I use my own inventory in my home directory. I can open an SSH connection to the target node where I want to push that nginx playbook and log in. The public key is on the remote server in /home/user/.ssh/id_ed25519.pub
What am I missing?
Copy /etc/ansible/ansible.cfg into the directory from which you are running the nginx.yaml playbook, or somewhere else per the documentation: https://docs.ansible.com/ansible/latest/reference_appendices/config.html#ansible-configuration-settings-locations
Then edit that file to change this line:
#private_key_file = /path/to/file
to read:
private_key_file = /path/to/.ssh/id_ed25519
Also check the remote_user entry.
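For example, a minimal ansible.cfg for this setup might look like the following; the inventory path and user name are placeholders for your own values:
[defaults]
inventory = /home/myuser/.ansible/hosts
private_key_file = /path/to/.ssh/id_ed25519
remote_user = myuser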

ansible playbook: Cannot launch a service as root

I've been banging my head on this one for most of the day; I've tried everything I could without success, even with the help of my sysadmin. (Note that I am not at all an Ansible expert; I discovered it today.)
Context: I am trying to implement continuous integration of a Java service via GitLab. On a push, a pipeline will run tests, package the JAR, then run an Ansible playbook to stop the existing service, replace the JAR, and launch the service again. We have that for production in Google Cloud, and it works fine. I'm trying to add an extra step before that, to do the same on localhost.
And I just can't understand why Ansible fails to do a "sudo service XXXX stop|start". All I get is:
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Sorry, try again.\n[sudo via ansible, key=nbjplyhtvodoeqooejtlnhxhqubibbjy] password: \nsudo: 1 incorrect password attempt\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
Here is the GitLab pipeline stage that I call:
indexer-integration:
  stage: deploy integration
  script:
    - ansible-playbook -i ~/git/ansible/inventory deploy_integration.yml --vault-password-file=/home/gitlab-runner/vault.txt
  when: on_success
vault.txt contains the vault encryption password. Here is the deploy_integration.yml
---
- name: deploy integration saleindexer
  hosts: localhost
  gather_facts: no
  user: test-ccc  # this is the user that I created as a test
  connection: local
  vars_files:
    - /home/gitlab-runner/secret.txt  # holds the sudo password
  tasks:
    - name: Stop indexer
      service: name=indexer state=stopped
      become: true
      become_user: root
    - name: Clean JAR
      become: true
      become_user: root
      file:
        state: absent
        path: '/PATH/indexer-latest.jar'
    - name: Copy JAR
      become: true
      become_user: root
      copy:
        src: 'target/indexer-latest.jar'
        dest: '/PATH/indexer-latest.jar'
    - name: Start indexer
      service: name=indexer state=started
      become: true
      become_user: root
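For context, a vars file like secret.txt is normally encrypted with the vault password that the pipeline passes via --vault-password-file; a sketch of the command (the paths follow the question, the command itself is standard ansible-vault usage):
ansible-vault encrypt /home/gitlab-runner/secret.txt --vault-password-file=/home/gitlab-runner/vault.txt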
The user 'test-ccc' is another user that I created (part of the root group and in the sudoers file) to make sure it was not an issue related to the gitlab-runner user (and because apparently no one here can remember the sudo password of that user xD).
I've tried a lot of things, including:
shell: echo 'password' | sudo -S service indexer stop
That works on the command line, but when executed by Ansible, all I get is a prompt asking me to enter the sudo password.
Thanks
Edit per comment request: secret.txt has:
ansible_become_pass: password
When using that user on the command line (su user / sudo service start ...) and prompted for that password, it works fine. The problem, I believe, is that either Ansible always prompts for the password, or the password is not properly passed to the task.
The sshd_config has a line 'PermitRootLogin yes'
OK, thanks to a response (now deleted) from techraf, I noticed that the line
user: test-ccc
is actually useless; everything was still run by the 'gitlab-runner' user. So I:
put all my actions in a script postbuild.sh
added gitlab-runner to the sudoers file and allowed that script with no password:
gitlab-runner ALL=(ALL) NOPASSWD:/home/PATH/postbuild.sh
removed everything about passing the password and the secret from the Ansible task, and used instead:
shell: sudo -S /home/PATH/postbuild.sh
So that works: the script is executed, and the service is stopped/started. I'll mark this as answered, even though using service: name=indexer state=started and giving NOPASSWD:ALL to the user still caused an error (the one in my comment on the question). If anyone can shed light on that in the comments ...

How can a user with SSH key authentication have sudo powers in Ansible? [duplicate]

This question already has answers here:
Missing sudo password in Ansible
(14 answers)
Ansible: sudo without password
(3 answers)
Closed 4 years ago.
I create a VM in the Azure cloud with the following Ansible playbook:
---
- name: azure playbook
  hosts: localhost
  vars_files: ['vars.yaml']
  tasks:
    - name: Create VM with defaults
      azure_rm_virtualmachine:
        resource_group: "{{account_prefix}}_rg"
        vm_size: Standard_D1
        name: "{{account_prefix}}-vm1"
        storage_account_name: "{{account_prefix}}store1"
        network_interface_names: "{{account_prefix}}vm1eth0"
        ssh_password_enabled: false
        admin_username: owen
        ssh_public_keys:
          - { path: /home/owen/.ssh/authorized_keys,
              key_data: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDH0q4pmdkJcc/JPVJui5uWMV12GsJAsDCosfUSSFZfTIx92bb9FC3hx1zU7tD1+Zw3aQW13m6ZS2T ... YnvieSbdD3v}
        image:
          offer: CentOS
          publisher: OpenLogic
          sku: '7.2'
          version: latest
but when running a further script to add another user:
---
- name: create user
  hosts: my-vm1.westeurope.cloudapp.azure.com
  # vars_files: ['vars.yaml']
  remote_user: owen
  tasks:
    - name: Create User
      user:
        name: andrea
        password: $6$rounds=656000$1AspdTb0lfOSc5yM$bAkPgHkuHwap/j6f0P88WxOdjxq3MCRO7/qgufYB.s/4t4k99wwtu/.../
        group: users
        shell: /bin/bash
      become: true
I get "sudo: a password is required" error:
PLAY [create user] *************************************************************
TASK [setup] *******************************************************************
fatal: [my-vm1.westeurope.cloudapp.azure.com]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "sudo: a password is required\r\n", "msg": "MODULE FAILURE", "parsed": false}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit #8-add-admin-user-to-vm-with-userpswd-already.retry
My inventory looks like this:
my-vm1.westeurope.cloudapp.azure.com ansible_ssh_private_key_file=/home/myuser/.ssh/id_rsa ansible_user=owen ansible_become=true
So how can the user get sudo privileges, so that Ansible's 'become' and the like can be used?
Note that the same result happens when ansible_user and ansible_become are omitted from the inventory file.
EDIT: If I SSH onto the VM as owen (from the box with the SSH private key that created the VM), then I am able to run sudo visudo -f /etc/sudoers and access that file. So does owen have sudo privileges or not? I'm getting confused now!! Am I misunderstanding the error from the Ansible add-user script?
EDIT2: I think this question is invalid, as the user does have sudo privileges, added manually through the portal. I'm still not sure what's going on, but I don't think this question is coherent or really represents the actual problem I'm trying to solve.
You can either change the sudo config for the user owen with this command:
sudo visudo -f /etc/sudoers
and change the line with user owen to this:
owen ALL=(ALL) NOPASSWD:ALL
then sudo won't require a password from Ansible. Or you could instruct Ansible to ask you for the password with the parameter --ask-become-pass, like this:
ansible-playbook site.yml --ask-become-pass
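A third option is to supply the password in the ansible_become_pass variable, for example on the inventory line (shown in plain text only as a sketch; in practice keep it in a vault-encrypted vars file):
my-vm1.westeurope.cloudapp.azure.com ansible_user=owen ansible_become=true ansible_become_pass=YourSudoPassword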

Ansible Service Restart Failed

I've been having some trouble with restarting the SSH daemon with Ansible.
I'm using the latest software as of May 11 2015 (Ansible 1.9.1 / Vagrant 1.7.2 / VirtualBox 4.3.26 / Host: OS X 10.10.1 / Guest: ubuntu/trusty64)
tl;dr: There appears to be something wrong with the way I'm invoking the service syntax.
Problem With Original Use Case (Handler)
Playbook
- hosts: all
  remote_user: vagrant
  tasks:
    ...
    - name: Forbid SSH root login
      sudo: yes
      lineinfile: dest=/etc/ssh/sshd_config regexp="^PermitRootLogin" line="PermitRootLogin no" state=present
      notify:
        - restart ssh
    ...
  handlers:
    - name: restart ssh
      sudo: yes
      service: name=ssh state=restarted
Output
NOTIFIED: [restart ssh]
failed: [default] => {"failed": true}
FATAL: all hosts have already failed -- aborting
The nginx handler completed successfully with nearly identical syntax.
Task Also Fails
Playbook
- name: Restart SSH server
  sudo: yes
  service: name=ssh state=restarted
Same output as the handler use case.
Ad Hoc Command Also Fails
Shell
> ansible all -i ansible_inventory -u vagrant -k -m service -a "name=ssh state=restarted"
Inventory
127.0.0.1:8022
Output
127.0.0.1 | FAILED >> {
    "failed": true,
    "msg": ""
}
Shell command in box works
When I SSH in and run the usual command, everything works fine.
> vagrant ssh
> sudo service ssh restart
ssh stop/waiting
ssh start/running, process 7899
> echo $?
0
Command task also works
Output
TASK: [Restart SSH server] ****************************************************
changed: [default] => {"changed": true, "cmd": ["service", "ssh", "restart"], "delta": "0:00:00.060220", "end": "2015-05-11 07:59:25.310183", "rc": 0, "start": "2015-05-11 07:59:25.249963", "stderr": "", "stdout": "ssh stop/waiting\nssh start/running, process 8553", "warnings": ["Consider using service module rather than running service"]}
As we can see in the warning, we're supposed to use the service module, but I'm still not sure where the snag is.
As the comments above state, this is an Ansible issue that will apparently be fixed in the 2.0 release.
I just changed my handler to use the command module and moved on:
- name: restart sshd
  command: service ssh restart

Having trouble provisioning EC2 instances using Ansible

I'm very confused about how you are supposed to launch EC2 instances using Ansible.
I'm trying to use the ec2.py inventory scripts. I'm not sure which one is supposed to be used, because there are three installed with Ansible:
ansible/lib/ansible/module_utils/ec2.py
ansible/lib/ansible/modules/core/cloud/amazon/ec2.py
ansible/plugins/inventory/ec2.py
I thought running the one in plugins/inventory/ would make the most sense, so I ran it using:
ansible-playbook launch-ec2.yaml -i ec2.py
which gives me:
msg: Either region or ec2_url must be specified
So I add a region (even though I have a vpc_subnet_id specified) and I get:
msg: Region us-east-1e does not seem to be available for aws module boto.ec2. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path
I'm thinking Amazon must have recently changed EC2 so you need to use a VPC? Even when I try to launch an instance from Amazon's console, the option for "EC2 Classic" is disabled.
When I try to use the ec2.py script in cloud/amazon/ I get:
ERROR: Inventory script (/software/ansible/lib/ansible/modules/core/cloud/amazon/ec2.py) had an execution error:
There are no more details than this.
After some searching, I see that the ec2.py module in /module_utils has been changed so that a region doesn't need to be specified. I try to run this file but get:
ERROR: The file /software/ansible/lib/ansible/module_utils/ec2.py is marked as executable, but failed to execute correctly. If this is not supposed to be an executable script, correct this with chmod -x /software/ansible/lib/ansible/module_utils/ec2.py.
So as the error suggests, I remove the executable permissions for the ec2.py file, but then get the following error:
ERROR: /software/ansible/lib/ansible/module_utils/ec2.py:30: Invalid ini entry: distutils.version - need more than 1 value to unpack
Does anyone have any ideas on how to get this working? Which is the correct file to use? I'm completely lost at this point.
There are several questions in your post. I'll try to summarise them in three items:
Is it still possible to launch instances in EC2 Classic (no VPC)?
How do I create a new EC2 instance using Ansible?
How to launch the dynamic inventory file ec2.py?
1. EC2 Classic
Your options will differ depending on when you created your AWS account, the type of instance, and the AMI virtualisation type used. Refs: aws account, instance type.
If none of the above parameters restricts the usage of EC2 Classic, you should be able to create a new instance without defining any VPC.
2. Create a new EC2 instance with Ansible
Since your instance doesn't exist yet, a dynamic inventory file (ec2.py) is useless. Instruct Ansible to run on your local machine instead.
Create a new inventory file, e.g. new_hosts with the following contents:
[localhost]
127.0.0.1
Then your playbook, e.g. create_instance.yml, should use a local connection and hosts: localhost. See the example below:
--- # Create ec2 instance playbook
- hosts: localhost
  connection: local
  gather_facts: false
  vars_prompt:
    inst_name: "What's the name of the instance?"
  vars:
    keypair: "your_keypair"
    instance_type: "m1.small"
    image: "ami-xxxyyyy"
    group: "your_group"
    region: "us-west-2"
  tasks:
    - name: make one instance
      ec2: image={{ image }}
           instance_type={{ instance_type }}
           keypair={{ keypair }}
           instance_tags='{"Name":"{{ inst_name }}"}'
           region={{ region }}
           group={{ group }}
           wait=true
      register: ec2_info
    - name: Add instances to host group
      add_host: hostname={{ item.public_ip }} groupname=ec2hosts
      with_items: ec2_info.instances
    - name: Wait for SSH to come up
      wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: ec2_info.instances
This play will create an EC2 instance and add its public IP to the in-memory host group ec2hosts, i.e. as if you had defined it in the inventory file. This is useful if you want to provision the instance just created: simply add a new play with hosts: ec2hosts.
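A follow-up play appended to the same playbook could then look like this sketch; the remote user is an assumption, since it depends on the AMI (e.g. ec2-user for Amazon Linux, ubuntu for Ubuntu images):
- hosts: ec2hosts
  remote_user: ec2-user
  tasks:
    - name: check connectivity to the new instance
      ping: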
Ultimately, launch ansible as follows:
export ANSIBLE_HOST_KEY_CHECKING=false
export AWS_ACCESS_KEY=<your aws access key here>
export AWS_SECRET_KEY=<your aws secret key here>
ansible-playbook -i new_hosts create_instance.yml
The purpose of the environment variable ANSIBLE_HOST_KEY_CHECKING=false is to avoid being prompted to add the ssh host key when connecting to the instance.
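If you prefer, the same can be set permanently in ansible.cfg instead of via the environment:
[defaults]
host_key_checking = False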
Note: boto needs to be installed on the machine that runs the above ansible command.
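If boto is missing, it can usually be installed with pip:
pip install boto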
3. Use ansible's ec2 dynamic inventory
The EC2 dynamic inventory consists of two files, ec2.py and ec2.ini. In your particular case, I believe the issue is that ec2.py is unable to locate the ec2.ini file.
To solve your issue, copy ec2.py and ec2.ini to the same folder in the machine where you intend to run ansible, e.g. to /etc/ansible/.
For versions before the Ansible 2.0 release (change the branch accordingly):
cd /etc/ansible
wget https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.py
wget https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.ini
chmod u+x ec2.py
For Ansible 2:
cd /etc/ansible
wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py
wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini
chmod u+x ec2.py
Configure ec2.ini and run ec2.py, which should print a JSON formatted list of hosts to stdout.
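A quick way to verify that the dynamic inventory works is to run it directly and then use it in an ad hoc command (the ping module usage is just an example):
./ec2.py --list
ansible -i ec2.py all -m ping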
