I'm trying to get KitchenCI to build test instances inside my Amazon VPC. I have this working; however, when Test Kitchen attempts to connect to the EC2 instance, it uses the instance's external (public) IP rather than its internal (VPC) IP. Is there any way to change this?
.kitchen.yml:
---
provisioner:
  name: chef_solo
platforms:
  - name: centos-6.5
    driver:
      name: vagrant
  - name: amazon
    driver:
      name: ec2
      image_id: ami-ed8e9284
      flavor_id: t2.medium
      aws_ssh_key_id: DevOps
      ssh_key: /Users/djimenez/.ssh/devops_rsa.pub
      availability_zone: us-east-1a
      subnet_id: subnet-1903a976
      require_chef_omnibus: true
      iam_profile_name: atc
      ebs_delete_on_termination: true
      security_group_ids: sg-7461ae1b
suites:
<snip>
Looks like I needed to add the following to my .kitchen.yml:
driver:
  name: ec2
  interface: private
The docs say:
interface
The place from which to derive the hostname for communicating with the instance. May be dns, public or private. If this is unset, the driver will derive the hostname by falling back in the following order:
DNS Name
Public IP Address
Private IP Address
The default is unset.
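For reference, merged into the amazon platform entry from the .kitchen.yml above, the relevant part becomes (all other keys unchanged):
platforms:
  - name: amazon
    driver:
      name: ec2
      interface: private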
I have an Ansible playbook that connects to my vCenter and builds out a VM. This works great assuming the network it will be built on has DHCP enabled. I am building mostly CentOS 7 VMs on a network that does not have DHCP enabled, meaning static IPs. The VM gets built, but then I am stuck logging into each VM manually and assigning the IP.
How can I tell CentOS to use a specific IP?
I am familiar with kickstart, but not sure how to trigger the install to pick up a ks file. (I know I can create a custom ISO, but I don't want to create a custom ISO for each VM I build.)
I have tried using the following flags on the Ansible vmware_guest module, but no luck.
Any suggestions?
vmware_guest:
  network:
    type: static
    ip: 192.168.1.5
    mask: 255.255.255.0
    gateway: 192.168.1.1
Please try using netmask instead of mask. Note also that the module's parameter is networks, a list of dicts:
vmware_guest:
  networks:
    - name: "{{ network_name }}"
      type: static
      ip: 192.168.1.5
      netmask: 255.255.255.0
      gateway: 192.168.1.1
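For context, a fuller task might look like the sketch below. The vCenter connection variables, datacenter, and template name are placeholders, and the static IP is applied through guest customization, which generally requires cloning from a template with VMware Tools installed:
- name: Clone a CentOS 7 VM with a static IP
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: "{{ datacenter_name }}"
    name: centos7-vm-01
    template: centos7-template   # assumed template with VMware Tools installed
    state: poweredon
    networks:
      - name: "{{ network_name }}"
        type: static
        ip: 192.168.1.5
        netmask: 255.255.255.0
        gateway: 192.168.1.1
  delegate_to: localhost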
I have an Ansible playbook for GCE VM creation. This playbook launches a GCE VM with a public IP address.
How can I launch my GCE VM without a public IP address?
I'm new to Ansible.
I suppose you are using the GCE Ansible module for VM creation. According to the GCE module documentation, the module has a parameter named external_ip. Set it to none if you don't want any public IP address assigned to your VM.
external_ip:
type of external ip, ephemeral by default; alternatively, a fixed gce ip or ip name can be given. Specify 'none' if no external ip is desired.
Example:
- gce:
    instance_names: my-test-instance1
    zone: us-central1-a
    machine_type: n1-standard-1
    external_ip: none
    image: debian-8
    state: present
    service_account_email: "your-sa@your-project-name.iam.gserviceaccount.com"
    credentials_file: "/path/to/your-key.json"
    project_id: "your-project-name"
    disk_size: 32
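For completeness, the gce module talks to the GCE API from the control machine, so the task is typically wrapped in a localhost play; a minimal sketch using the same values as above:
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Create an instance without a public IP
      gce:
        instance_names: my-test-instance1
        zone: us-central1-a
        machine_type: n1-standard-1
        external_ip: none
        image: debian-8
        state: present
        service_account_email: "your-sa@your-project-name.iam.gserviceaccount.com"
        credentials_file: "/path/to/your-key.json"
        project_id: "your-project-name"
        disk_size: 32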
I am trying to use a NAT instance rather than a NAT gateway; I am also not using any community AMIs for the NAT instance configuration.
I am trying to do a yum update from my private instance, but I am thrown the following error: Cannot find a valid baseurl for repo: amzn-main/latest
My AWS stack is as follows:
VPC: a VPC VPC1 with an Internet gateway IGW1 attached.
Subnets: two subnets - a public Subnet1.1-1a in us-east-1a and a private Subnet1.2-1b in us-east-1b.
Route tables:
Public-IGW-1, associated with Subnet1.1-1a, has routes: local, and 0.0.0.0/0 via IGW1.
Private-1, associated with Subnet1.2-1b, has routes: local, and 0.0.0.0/0 via the NAT instance NAT EC2 1.
Security groups:
Subnet-1.1-1a-Public in VPC1 allows SSH from MyIP and HTTP from anywhere.
Subnet1.1-1a-Private in VPC1 (used in us-east-1b; I have to rename it, else it's deceiving) allows inbound 22 from anywhere.
Instances:
NAT EC2 1 lives in Subnet1.1-1a of VPC1 with security group NAT SG, inbound 80 and 22 from anywhere. The private instance's SG allows 22 from anywhere. The public instance's SG allows 22 from MyIP and 80 from anywhere.
I copied my keypair onto the public instance with scp and SSHed into the private instance with ssh -i keypair ec2-user@private-ip-addr. When I do a sudo yum update, the cannot find a valid baseurl error is shown.
I have made sure that the NACL is allowing all traffic.
I figured it out. The NAT instance and the public instance have to be in the same security groups.
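As an aside, since the NAT instance here is built from a plain AMI rather than a community NAT AMI, it also needs IP forwarding and masquerading configured (and source/destination checking disabled on the instance in EC2). A minimal cloud-init sketch, assuming a 10.0.0.0/16 VPC CIDR and eth0 as the interface:
#cloud-config
runcmd:
  # enable routing of the private subnet's packets
  - sysctl -w net.ipv4.ip_forward=1
  # rewrite outbound traffic from the VPC to the NAT instance's address
  - iptables -t nat -A POSTROUTING -o eth0 -s 10.0.0.0/16 -j MASQUERADE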
I have an environment set up with Test-Kitchen v1.5.0 and Vagrant v1.8.1. I have a recipe that uses chef-vault to decrypt the encrypted passwords that are in our data_bags_path/passwords/pilot.json file.
I am using the solution at https://github.com/chef/chef-vault/issues/58 that daxgames provides towards the end of the page.
My .kitchen.yml:
---
driver:
  name: vagrant
provisioner:
  name: chef_zero
  require_chef_omnibus: 12.14.77
  roles_path: ../../roles
  environments_path: ../../environments
  data_bags_path: ../../data_bags
  client_rb:
    environment: lgrid2-dev
    node_name: "ltylapp400a"
    client_key: "/etc/chef/ltylapp400a.pem"
platforms:
  - name: centos-6.8
    driver:
      synced_folders:
        - ["/Users/212466756/.chef", "/etc/chef", "disabled:false"]
suites:
  - name: ltylapp400a
    run_list:
      - role[lgrid-db]
    attributes:
      chef_client:
A snippet from my recipe that deals with chef-vault:
case node["customer_conf"]["status"]
when 'pilot'
  passwords = ChefVault::Item.load('passwords', 'pilot')
when 'production'
  passwords = ChefVault::Item.load('passwords', node[:hostname][1..3])
end
My directory structure for the relevant data_bags:
data_bags
  passwords
    pilot.json
    pilot_keys.json
The error I am getting is that the client.pem that Vagrant generates at /etc/chef/ltylapp400a.pem cannot decrypt the contents of that data bag. Chef suggests that I run knife vault refresh, but I am not connected to my Chef server on my local machine, so running this gives an error about not having a Chef server to connect to. My question is how I can add the new key that Vagrant generated to pilot_keys.json so that it is able to decrypt that data bag.
More detailed answers are better; I am still somewhat new to Chef, Test-Kitchen, etc.
I was able to get this working; below are my results and conclusions. As I stated above, my issue was that I could not decrypt the data bag, since I could not add the new key that Vagrant created to the pilot_keys.json file: I was not connected to the Chef server and could not run a knife vault refresh/update. What I had to do was get the client.pem key from a server that already had access to the passwords/pilot data bag. I used our utility server's key, since it will not be destroyed in the near future.
So on my local PC I have a .chef/ directory under my home directory containing the client.pem key I copied from the utility server, and I sync this with /tmp/kitchen/, which acts as the /etc/chef directory in the Test-Kitchen environment.
---
driver:
  name: vagrant
provisioner:
  name: chef_zero
  require_chef_omnibus: 12.14.77
  roles_path: ../../roles
  environments_path: ../../environments
  data_bags_path: ../../data_bags
  client_rb:
    node_name: "utilityServer"
    client_key: "/tmp/kitchen/client.pem" # Chef::Vault needs a client.pem file to authenticate back to the data bag to decrypt it; it must be available at /tmp/kitchen/client.pem
    environment: dev
    no_proxy: 10.0.2.2
platforms:
  - name: centos-6.8
    driver:
      synced_folders:
        - ["~/.chef", "/tmp/kitchen/", "disabled:false"] # Gives the Vagrant box access to the .chef directory in your home directory, where the client.pem for authentication is stored.
suites:
  - name: lzzzdbx400a
    run_list:
      - role[lgrid-db]
    attributes:
The data_bags/passwords/pilot_keys.json looks like this:
{
  "id": "pilot_keys",
  "admins": [
    "utilityServer"
  ],
  "clients": [
    "webserver",
    "database"
  ],
  "search_query": "*:*",
  "utilityServer": "key",
  "webserver": "key",
  "database": "key"
}
Since the utilityServer key was already able to decrypt the passwords/pilot data bag, everything ran through fine the next time I ran kitchen converge.
During previous struggles with Kitchen and chef-vault I used the synced_folders method above to access a key. Revisiting this topic, I found another solution.
Kitchen Support
To make this work in kitchen, just put a cleartext data bag in the data_bags folder that your kitchen run refers to (probably in test/integration/data_bags). Then the vault commands fall back into using that dummy data when you use chef_vault_item to retrieve it.
reference: http://hedge-ops.com/chef-vault-tutorial/
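For example, a minimal cleartext stand-in for the vault item above could live at test/integration/data_bags/passwords/pilot.json (the key and value here are dummies):
{
  "id": "pilot",
  "password": "dummy-for-kitchen"
}
Note this fallback kicks in when the recipe retrieves the item via the chef_vault_item helper rather than calling ChefVault::Item.load directly.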
I have created my kitchen.yml in the following way:
---
driver:
  name: vagrant
  customize:
    memory: 2048
driver_config:
  require_chef_omnibus: true
  use_vagrant_berkshelf_plugin: true
provisioner:
  name: chef_zero
  chef_omnibus_url: http://box-url/install.sh
platforms:
  - name: prod.abc.com
    driver:
      box_url: http://abc.box
    run_list:
      - role[new_role]
suites:
  - name: default
In the above kitchen.yml, the hostname of the machine comes out as default-prodabccom. However, I want the hostname to be prod.abc.com.
What changes should I make in my kitchen.yml to get the correct name?
Hostname of the Guest System
In order to define the hostname of the operating system running inside the VM (cf. /etc/hostname), use the vm_hostname option of the kitchen-vagrant driver:
platforms:
  - name: prod.abc.com
    driver_config:
      vm_hostname: prod.abc.com
Name of the Test-Kitchen Suite/Platform
To rename the suite-platform combination shown in Converging <default-prodabccom>, you can only play with the names of the suite and the platform, e.g., to get production-abccom. This name is computed in test-kitchen itself and, for example, all dots are stripped; that cannot simply be changed.
Nevertheless, if I understand it right that you want to change this name: it makes little sense to me. Don't change that.
Name of the VM in VirtualBox
The name of the VM (e.g. kitchen-default-prodabcom_..default_1234..) is derived in kitchen-vagrant and cannot easily be changed.
I found this question because I was in the scenario where I am testing a number of kitchen-enabled repos, each containing a number of platforms, e.g.
elasticsearch
  centos-6
  centos-7
java
  centos-6
  centos-7
You can give those machines their own IP from VirtualBox when they are spun up like so:
driver:
  name: vagrant
  network:
    - ["private_network", { type: "dhcp" }]
This facilitates testing: if something has failed, you can get to the box directly. And you can use the Vagrant HostManager plugin to keep your /etc/hosts updated with the current IP address.
So you can go to http://default-centos-74.vagrantup.com in a local browser to check that instance. You can also name your suites in such a way that it leads to unique names for each instance, across repos, for example prefixing each like so:
suites:
  - name: elasticsearch-default
and in the other .kitchen.yml:
suites:
  - name: java-default
which still leads to useful naming:
http://elasticsearch-default-centos-74.vagrantup.com
However, what has happened recently is that Chrome and Firefox have started to enforce HSTS, which makes getting to non-HTTPS local sites mapped using /etc/hosts a PITA.
The main thing is to get rid of the vagrantup.com suffix. However, that is hard-coded, and the only option for overriding it (vm_hostname) lives in .kitchen.yml, which is unfortunate because .kitchen.yml doesn't know the suite and platform at the point the Vagrantfile is generated, so it's not much use.
You can use Chef/Ansible to rename the box, but that is not very nice. The solution I came up with is as follows.
You can set a custom Vagrantfile.erb in .kitchen.yml:
---
driver:
  name: vagrant
  network:
    - ["private_network", { type: "dhcp" }]
  vagrantfile_erb: Vagrantfile.erb
Then copy that Vagrantfile.erb out of the gem on your local box into the root of your test-kitchen repo. Mine was at /home/user1/.gem/ruby/gems/kitchen-vagrant-1.3.0/templates/Vagrantfile.erb.
You can then set arbitrary names for your boxes by changing line 36:
c.vm.hostname = "<%= @instance.name %>.<%= config[:kitchen_root].split('/')[-1] %>.testbox"
or you can modify it like so, allowing overrides from the .kitchen.yml config:
36c36
< c.vm.hostname = "<%= config[:vm_hostname] %>"
---
> c.vm.hostname = "<%= @instance.name %>.<%= config[:var_domain] ? config[:var_domain] : config[:kitchen_root].split('/')[-1] %>.<%= config[:var_suffix] ? config[:var_suffix] : "vagrantup.com" %>"
99d98
<
https://gist.github.com/tolland/fe01eb0f46d26850cc5c98e167578f7b
And then you can set arbitrary names for your boxes by setting var_suffix and var_domain in .kitchen.yml:
---
driver:
  name: vagrant
  network:
    - ["private_network", { type: "dhcp" }]
  vagrantfile_erb: Vagrantfile.erb
  #var_domain: somethingsomething
  var_suffix: testbox
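With that in place, and the repo checked out at e.g. .../elasticsearch, the template above would produce hostnames like elasticsearch-default-centos-74.elasticsearch.testbox (instance name, then the kitchen_root directory name, then the suffix), which HostManager can then map in /etc/hosts without the HSTS problem.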