I have a GitLab runner whose job is to:
a) Create IaC using Terraform (7 EC2 instances with a defined key pair).
b) Run an Ansible playbook that needs to SSH to all 7 instances and configure Kafka.
At the moment I have automated part a. I then SSH to one of the instances using a private key, copy the Ansible code and the private key to the instance, and then execute the following command to run the playbook:
ansible-playbook -i hosts.yml --private-key=/home/ec2-user/keyname.pem all.yml
This all works fine but, obviously, I want to automate running the Ansible step in a GitLab runner without having to store my private key on the Docker container or in the git repo.
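For reference, the current manual flow looks roughly like this (the instance address, key name, and directory layout are illustrative):
scp -i keyname.pem -r ansible/ keyname.pem ec2-user@<first-instance-ip>:/home/ec2-user/
ssh -i keyname.pem ec2-user@<first-instance-ip>
ansible-playbook -i hosts.yml --private-key=/home/ec2-user/keyname.pem all.yml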
I have briefly investigated SSM but don't really understand how that all works.
Note: I need the key for two purposes:
a) to SSH into the first instance
b) it is referenced in hosts.yml so that the Ansible playbook can connect to all the other instances
Thanks in advance everyone.
Cheers
Adam
From reading the Connect to your Windows instance AWS EC2 docs page, my understanding is that it is not possible to SSH to Windows EC2 instances.
The typical procedure to connect to a Windows EC2 instance manually is to download the remote desktop file, get the password for the instance, and then use the Remote Desktop Connection tool to RDP to the instance (more detail is in the docs page above).
If I am correct that Windows EC2 instances do not support connecting via SSH, how can you connect to a Windows EC2 instance from an Ansible playbook?
I would prefer to be able to do this without installing any software on the Windows EC2 instance beforehand, but if that is necessary, I can do that.
I have found you need to do the following to connect to a Windows EC2 instance using Ansible:
You need to configure the EC2 instance to allow connections from Ansible using the ConfigureRemotingForAnsible.ps1 script. This can be done either by passing the script as user data when you create the instance, or by running the script after the instance is created.
You need to add a security group, or configure a security group already attached to the EC2 instance, to allow the following incoming traffic from the host(s) that the Ansible playbook will be running on:
WinRM
TCP requests to whatever you configure as the Ansible port (by default 5985 for WinRM over HTTP, 5986 over HTTPS)
You need to install pywinrm>=0.3.0 on the machine running Ansible so that it can use WinRM to connect to the EC2 instance.
You need to run the Ansible playbook with the ansible_connection variable set to winrm and the ansible_winrm_scheme variable set to http. This can be done with --extra-vars or any other way that variables are set (a combined invocation is sketched after this list).
You need to provide the public IP address of the Windows EC2 host, either under hosts in the playbook, or in a host file passed to ansible-playbook with -i.
You need to get or set the EC2's Administrator password, and then provide this password with the ansible_password variable for the EC2.
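Putting the last few points together, a minimal sketch of the control-machine side could look like this (the inventory path, playbook name, and password are placeholders; in practice you would keep the password in a vault or CI secret rather than on the command line):
pip install "pywinrm>=0.3.0"
ansible-playbook -i inventory.ini site.yml \
  -e "ansible_connection=winrm ansible_winrm_scheme=http ansible_port=5985" \
  -e "ansible_user=Administrator ansible_password=<admin-password>"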
I am unable to run an ansible-playbook or use ansible ping on an AWS instance. However, I can SSH into the instance with no problem. My hosts file is this:
[instance]
xx.xx.xxx.xxx ansible_ssh_user=ubuntu ansible_ssh_private_key_file=/home/josh/Ansible/Amazon/AWS.pem
Should I not use a direct path? I am trying to use Ansible to install Apache onto the server (the playbook task is essentially service: name=apache2 state=started). In my security group in the AWS console, I allowed all incoming SSH traffic on port 22, and Ansible tries to SSH through port 22, so that should not be the problem. Is there some crucial idea behind SSHing into instances that I didn't catch onto? I tried following this post: Ansible AWS: Unable to connect to EC2 instance, but to no avail.
Make sure that inside ansible.cfg you have:
private_key_file = /path/to/server-private-key
On the host machine, don't change the default authorized_keys file. A better way is to create a dedicated user, create a .ssh directory for that user, and inside it create a file called authorized_keys into which you paste your server's public key, i.e. ~/.ssh/authorized_keys (a sketch is below).
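A minimal sketch of that setup on the target machine (the user name and public key file are illustrative):
sudo useradd -m deploy
sudo mkdir -p /home/deploy/.ssh
sudo sh -c 'cat server-public-key.pub >> /home/deploy/.ssh/authorized_keys'
sudo chmod 700 /home/deploy/.ssh
sudo chmod 600 /home/deploy/.ssh/authorized_keys
sudo chown -R deploy:deploy /home/deploy/.ssh
Then point private_key_file in ansible.cfg at the matching private key.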
Try: ansible-playbook yourplaybookname.yml --connection=local
(Ansible defaults to SSH.)
I am able to run OpenWhisk on my local dev machine. I would like to extend this to a production environment. Is there any concept of an OpenWhisk cluster? I am not able to find good documentation on this, e.g. how auto load balancing is achieved, etc.
OpenWhisk is deployed via Ansible and as such can be distributed across multiple VMs in a straightforward way.
Check the README on distributed deployments for further information and guidance.
OpenWhisk uses Ansible for its deployment. I followed the following approach for my distributed setup.
First, ensure passwordless SSH connectivity to all the servers.
git clone https://github.com/apache/incubator-openwhisk.git
Add the remote_user and private_key_file values to the defaults section of the ansible.cfg file. The remote_user value sets the default SSH user. The private_key_file is required when using a private key that is not in the default ~/.ssh folder.
[defaults]
remote_user = ubuntu
private_key_file = /path/to/file.pem
Go to tools/ubuntu-setup and run all.sh to install all the required software.
Now modify the inventory file (hosts) for your first node; this can become your bootstrapper VM.
Check if you are able to ping the hosts: ansible all -i environments/distributed/hosts -m ping
If the ping is fine, run the next command to generate the config files: ansible-playbook -i environments/distributed/hosts setup.yml
To install the prerequisites: ansible-playbook -i environments/distributed prereq_build.yml
Deploy the registry: ansible-playbook -i environments/distributed registry.yml
Go to the OpenWhisk home directory and run the following command to build OpenWhisk, substituting your Docker/registry host for the placeholders:
./gradlew distDocker -PdockerHost=<docker_host>:4243 -PdockerRegistry=<docker_registry>:5000
Once the build is successful, run the following commands from the ansible folder:
ansible-playbook -i environments/distributed/hosts couchdb.yml
ansible-playbook -i environments/distributed/hosts initdb.yml
ansible-playbook -i environments/distributed/hosts wipe.yml
ansible-playbook -i environments/distributed/hosts openwhisk.yml
ansible-playbook -i environments/distributed/hosts postdeploy.yml
Now edit the hosts file for the other hosts and repeat steps 7-8 and 12.
This will create the setup on all the nodes. Once done, you can put a load balancer in front of them. For sync between the DB instances I am using CouchDB continuous replication (a sketch is below).
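For the replication part, a continuous replication can be started with a single request to CouchDB's _replicate endpoint; the hosts, credentials, and database name here are placeholders, not my actual setup:
curl -X POST http://admin:secret@db1:5984/_replicate \
  -H "Content-Type: application/json" \
  -d '{"source": "http://admin:secret@db1:5984/<db_name>", "target": "http://admin:secret@db2:5984/<db_name>", "continuous": true}'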
I am trying to execute a bash script on an EC2 instance using boto. With boto3 and paramiko I can SSH to an EC2 instance by its public IP, but in my case the instances have only private IPs. SSH to these instances is done through a host that can SSH to all of them by private IP (a bastion host).
The following is the script that connects to an instance by public IP:
import boto3
import paramiko

# Fetch the SSH private key from S3 and load it for paramiko
s3_client = boto3.client('s3')
s3_client.download_file('mybucket', 'key/mykey.pem', '/tmp/mykey.pem')
k = paramiko.RSAKey.from_private_key_file("/tmp/mykey.pem")

c = paramiko.SSHClient()
c.set_missing_host_key_policy(paramiko.AutoAddPolicy())
host = event  # the target host is passed in (e.g. from the Lambda event)
print("Connecting to " + host)
c.connect(hostname=host, username="ec2-user", pkey=k)
How do I connect to instances that have only a private IP instead of a public IP, if we want to connect through a bastion host with public IP P.P.P.P?
If your requirement is to trigger execution of some code on an Amazon EC2 instance, then it would be better to use the Amazon EC2 Run Command rather than try to automate an SSH connection.
Amazon EC2 Run Command provides a simple way of automating common administrative tasks like executing Shell scripts and commands on Linux, running PowerShell commands on Windows, installing software or patches, and more. Amazon EC2 Run Command allows you to execute these commands across multiple instances and provides visibility into the results, making it easy to manage configuration change across fleets of instances.
Your instances would need the Amazon EC2 Systems Manager (SSM) agent installed. See: Installing SSM Agent
You would then run commands on Amazon EC2 instances from the management console, AWS Command-Line Interface (CLI) or via an API call.
The send command does not accept tags as input. However, you could first run a describe-instances command to search for instances by tag, then pass the instance IDs to the send command, as sketched below. See: AWS CLI send-command
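A rough sketch of that flow with the AWS CLI (the tag key/value, instance ID, and script path are illustrative):
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=my-private-instance" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" --output text
aws ssm send-command \
  --instance-ids i-0123456789abcdef0 \
  --document-name "AWS-RunShellScript" \
  --parameters commands="bash /home/ec2-user/myscript.sh"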
I am new to Ansible. I am using Ansible to add the instances created by the ELB (my AWS setup will create instances for the ELB) to the Ansible hosts file and access the instances from the Ansible server. From a Linux machine, I use a jumpbox and a .pem key to access the EC2 instances. How would I do this in Ansible?
You should be able to pass in the flag --private-key=. You will probably also want to use -u ec2-user to instruct Ansible to log in as the correct user.
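For example, something along these lines; the inventory name, playbook name, and key path are assumptions based on the question:
ansible-playbook -i hosts site.yml --private-key=/path/to/key.pem -u ec2-user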