Enable EPEL with cloud-init via "User Data" (Amazon Linux)

I'm trying to install the p7zip package after launching an Amazon Linux-based EC2 instance in AWS via the "User Data" feature (using cloud-init):
#cloud-config
repo_update: true
repo_upgrade: all
packages:
  - p7zip
However, since p7zip isn't available in the default repositories and requires EPEL to be enabled, the package is not being fetched.
My question is: using cloud-init, how do I enable EPEL before fetching packages when initialising the EC2 instance?

#cloud-config
# vim: syntax=yaml
#
# Add yum repository configuration to the system
#
# The following example adds the file /etc/yum.repos.d/epel_testing.repo
# which can then subsequently be used by yum for later operations.
yum_repos:
  # The name of the repository
  epel-testing:
    # Any repository configuration options
    # See: man yum.conf
    #
    # This one is required!
    baseurl: http://download.fedoraproject.org/pub/epel/testing/5/$basearch
    enabled: false
    failovermethod: priority
    gpgcheck: true
    gpgkey: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL
    name: Extra Packages for Enterprise Linux 5 - Testing
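Note that this example registers the repository but leaves it disabled (enabled: false). As a hedged aside: with a disabled repo you can still install individual packages from it on demand with yum's --enablerepo flag, e.g.:
# Enable the repo only for this one transaction (package name is a placeholder)
yum --enablerepo=epel-testing install some-package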

For more recent versions of Amazon Linux, you need to add the following to the cloud-config file:
yum_repos:
  epel_custom:
    name: Extra Packages for Enterprise Linux 6 - $basearch
    baseurl: http://download.fedoraproject.org/pub/epel/6/$basearch
    mirrorlist: https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
    failovermethod: priority
    enabled: true
    gpgcheck: true
    gpgkey: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
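Once the instance is up, a quick check (not part of the original answer) confirms the repository was registered:
# epel_custom should appear in the list of enabled repositories
yum repolist enabled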
Here is an example of a working cloud-config file that can be used at boot as user data.

The following enables EPEL with GPG verification. Note that the key is imported at initial boot.
#cloud-config
bootcmd:
  - [ cloud-init-per, once, gpg-key-epel, rpm, "--import", "https://archive.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7" ]
yum_repos:
  epel:
    name: EPEL
    mirrorlist: https://mirrors.fedoraproject.org/mirrorlist?repo=epel-7&arch=$basearch
    enabled: true
    gpgcheck: true
repo_update: true
repo_upgrade: all
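With the repository enabled this way, the packages directive from the original question should then resolve p7zip; appended to the same file (a sketch assembling the pieces shown above, untested):
packages:
  - p7zip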
From https://github.com/trajano/terraform-docker-swarm-aws/blob/master/common.cloud-config

Run Ansible playbook from Cloud-Init

I have been learning cloud-init for several days to do an automatic deployment. To achieve this and apply certain configurations, I am using Ansible playbooks. The problem I have found is that I am not able to make the playbook run directly on the operating system that is being installed.
Here is the user-data file I am using.
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: hostname
    password: "$6$cOciYeIErEet80Rv$YX8qt6vizXgcUkgIPSKD1qNZNxe77tSWOY3k/0.i8D8EpApaGNuyucxJvONmZiRj4rVM3L6EE4sLKcnzYVcMj/ "
    username: ubuntu
  storage:
    layout:
      name: direct
  locale: es_ES
  timezone: "Europe/Madrid"
  keyboard:
    layout: es
  packages:
    - sshpass
    - ansible
    - git
  late-commands:
    - git clone https://github.com/MarcOrfilaCarreras/dotfiles /target/root/dotfiles
    - ansible-playbook -i inventory-test /root/dotfiles/ansible/playbooks/docker.yml -u ubuntu -e "ansible_password=ubuntu" -e "ansible_become_pass=ubuntu"
PS: I am using Ubuntu Server 22.04; the Ansible command is temporary and only for testing, and I know that I have to change the identity fields.
If you want to configure localhost, it's better to use the local transport (-c local on the command line).
Basically, change the Ansible call to:
ansible-playbook -i inventory-test /root/dotfiles/ansible/playbooks/docker.yml -c local
This will bypass all the SSH machinery and run the playbook locally.
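Applied to the user data above, the late-commands section would look roughly like this (a sketch: the playbook is cloned to /target/root/dotfiles, i.e. inside the installed system, so running it there via curtin in-target is an assumption beyond the answer itself):
late-commands:
  - git clone https://github.com/MarcOrfilaCarreras/dotfiles /target/root/dotfiles
  # Run the playbook inside the installed system (/target) using the local transport
  - curtin in-target -- ansible-playbook -i inventory-test /root/dotfiles/ansible/playbooks/docker.yml -c local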

Can't accept license while running kitchen test

I run kitchen test and can't accept the license; answering yes does nothing.
kitchen.yml
provisioner:
  name: chef_zero
  always_update_cookbooks: true
  retry_on_exit_code:
    - 35 # 35 is the exit code signaling that the node is rebooting
  max_retries: 1
  client_rb:
    exit_status: :enabled # Opt-in to the standardized exit codes
    client_fork: false # Forked instances don't return the real exit code
    environment: _default
  chef_license: accept
  product_name: chef
  chef-client: 14
Try updating chef_license from accept to accept-no-persist in the provisioner section, per the GitHub issue https://github.com/test-kitchen/test-kitchen/issues/1553:
provisioner:
  name: chef_zero
  always_update_cookbooks: true
  log_level: info
  chef_license: accept-no-persist
  product_name: chef
  product_version: 14
Note (via tas50's comment on the GitHub issue): Test Kitchen 1.x does not support Chef licensing. The plan is to backport the work on the 2.x branch to the 1-stable branch and release a new 1.x release with the support.
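As an aside (an assumption, not from the issue thread): Chef also reads license acceptance from the CHEF_LICENSE environment variable, so on a Test Kitchen version that supports licensing you can accept it without touching kitchen.yml:
# Accept the license for this shell session without persisting it to disk
export CHEF_LICENSE=accept-no-persist
kitchen test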

My Vagrant machine from chef kitchen cannot access the internet

I am trying to learn how to do local development of chef recipes. I am following this guide https://gist.github.com/smford22/f00f46471047422bd8a7
I am prefixing all the kitchen commands with chef exec because if I try to run kitchen directly, I get all sorts of ruby/gem errors.
When I run chef exec kitchen converge it gets stuck on installing the Chef Omnibus, hanging on "Trying wget..."
If I log in to the VM and run curl or wget commands such as curl https://google.com, it indeed cannot access the internet.
chef exec kitchen -v
Test Kitchen version 1.23.2
chef -v
Chef Development Kit Version: 3.5.13
chef-client version: 14.7.17
delivery version: master (6862f27aba89109a9630f0b6c6798efec56b4efe)
berks version: 7.0.6
kitchen version: 1.23.2
inspec version: 3.0.52
.kitchen.yml:
---
driver:
  name: vagrant
  ## The private_network feature lets you setup a private network on the VM guest
  ## via localhost on the host.
  ## see also: https://www.vagrantup.com/docs/networking/private_network.html
  # network:
  #   - ["private_network", {ip: "33.33.33.33"}]

provisioner:
  name: chef_zero

## The verifier section determines which test platform you want to use.
verifier:
  name: inspec
  format: doc

platforms:
  - name: centos-6.7

suites:
  - name: default
    run_list:
      - recipe[chef_httpd::default]
    attributes:
Are you accessing the internet through a proxy?
If yes, you need to configure the same proxy for your Vagrant VMs using the vagrant-proxyconf plugin.
Documentation: http://tmatilai.github.io/vagrant-proxyconf/
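A minimal sketch (the proxy host and port are placeholders): vagrant-proxyconf also picks its settings up from environment variables, so after installing the plugin you can export them and re-run the converge:
vagrant plugin install vagrant-proxyconf
export VAGRANT_HTTP_PROXY=http://proxy.example.com:8080
export VAGRANT_HTTPS_PROXY=http://proxy.example.com:8080
export VAGRANT_NO_PROXY=localhost,127.0.0.1
chef exec kitchen converge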

Renaming the Windows guest failed. Most often this is because you've specified a FQDN instead of just a host name

I am running this Vagrant version:
vagrant -v
Vagrant 1.9.3
vagrant plugin list
vagrant-butcher (2.2.1)
vagrant-cachier (1.2.1)
vagrant-omnibus (1.5.0)
vagrant-share (1.1.7, system)
vagrant-vbguest (0.13.0)
When I start a Vagrant VM (Windows 2012 R2), I get "Renaming the Windows guest failed. Most often this is because you've specified a FQDN instead of just a host name."
It used to work on the same host (CentOS 7, with VirtualBox) with version 1.4.
If you are (like me) experiencing this with Kitchen: in your .kitchen.yml, in the platforms section, you can't have "name: mwrock/Windows2012R2". Instead, name it something like "windows2012R2" and, in the platform's "driver_config" section, specify "box: mwrock/Windows2012R2", as sketched below.
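A minimal sketch of that change (the platform name itself is arbitrary):
platforms:
  - name: windows2012R2
    driver_config:
      box: mwrock/Windows2012R2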
Another way to resolve this issue is by setting the vm_hostname attribute to false, like this:
platforms:
  - name: BPA-TEST
    driver_config:
      username: Tester
      password: [PASSWORD]
      vm_hostname: false
    driver:
      port: 55985
      customize:
        memory: 4048
From https://github.com/test-kitchen/kitchen-vagrant:
vm_hostname sets the internal hostname for the instance. This is not used when connecting to the Vagrant virtual machine. To prevent this value from being rendered in the default Vagrantfile, you can set this value to false. The default will be computed from the name of the instance; for example, an instance called "default-fuzz-9" will produce a default vm_hostname value of "default-fuzz-9". For Windows-based platforms, a default of nil is used to save on boot time and potential rebooting.

Chef Vault with Test-Kitchen, Vagrant and Chef-Zero provisioner

I have an environment set up with Test Kitchen v1.5.0 and Vagrant v1.8.1. I have a recipe that uses chef-vault to decrypt the encrypted passwords in our data_bags_path/passwords/pilot.json file.
I am using the solution here https://github.com/chef/chef-vault/issues/58 that daxgames provides towards the end of the page.
My .kitchen.yml:
---
driver:
  name: vagrant

provisioner:
  name: chef_zero
  require_chef_omnibus: 12.14.77
  roles_path: ../../roles
  environments_path: ../../environments
  data_bags_path: ../../data_bags
  client_rb:
    environment: lgrid2-dev
    node_name: "ltylapp400a"
    client_key: "/etc/chef/ltylapp400a.pem"

platforms:
  - name: centos-6.8
    driver:
      synced_folders:
        - ["/Users/212466756/.chef", "/etc/chef", "disabled:false"]

suites:
  - name: ltylapp400a
    run_list:
      - role[lgrid-db]
    attributes:
      chef_client:
A snippet from my recipe that deals with chef-vault:
case node["customer_conf"]["status"]
when 'pilot'
  passwords = ChefVault::Item.load('passwords', 'pilot')
when 'production'
  passwords = ChefVault::Item.load('passwords', node[:hostname][1..3])
end
My directory structure for relevant data_bags:
data_bags
  passwords
    pilot.json
    pilot_keys.json
The error I am getting is that the client.pem that Vagrant generates at /etc/chef/ltylapp400a.pem cannot decrypt the contents of that data bag. Chef suggests that I run knife vault refresh, but I am not connected to my Chef server on my local machine, so running that gives an error about not having a Chef server to connect to. My question is how I can add the new key that Vagrant generated to pilot_keys.json so that it is able to decrypt that data bag?
More detailed answers are better; I am still somewhat new to Chef, Test Kitchen, etc.
I was able to get this working; below are my results and conclusions. As I stated above, my issue was that I was unable to decrypt the data bag because I could not add the new key that Vagrant created to the pilot_keys.json file: I was not connected to the Chef server and so could not run knife vault refresh/update. What I had to do was get the client.pem key from a server that already had access to the pilot.json data bag. I used our utility server's key, since it will not be destroyed in the near future.
So on my local PC I have a .chef/ directory under my home directory containing the client.pem I copied from the utility server, and I sync it with /tmp/kitchen/, which acts as the /etc/chef directory in the Test Kitchen environment.
---
driver:
  name: vagrant

provisioner:
  name: chef_zero
  require_chef_omnibus: 12.14.77
  roles_path: ../../roles
  environments_path: ../../environments
  data_bags_path: ../../data_bags
  client_rb:
    node_name: "utilityServer"
    client_key: "/tmp/kitchen/client.pem" # Chef::Vault needs a client.pem to authenticate back to the data_bag and decrypt it; it must be stored at /tmp/kitchen/client.pem
    environment: dev
    no_proxy: 10.0.2.2

platforms:
  - name: centos-6.8
    driver:
      synced_folders:
        - ["~/.chef", "/tmp/kitchen/", "disabled:false"] # Gives the Vagrant box access to the .chef directory in your home directory, where the client.pem used for authentication is stored.

suites:
  - name: lzzzdbx400a
    run_list:
      - role[lgrid-db]
    attributes:
The data_bags/passwords/pilot_keys.json looks like this:
{
  "id": "pilot_keys",
  "admins": [
    "utilityServer"
  ],
  "clients": [
    "webserver",
    "database"
  ],
  "search_query": "*:*",
  "utilityServer": "key",
  "webserver": "key",
  "database": "key"
}
Since the utilityServer key was already able to decrypt the passwords/pilot data bag, everything ran through fine the next time I ran kitchen converge.
During previous struggles with Kitchen and chef-vault I used the synced_folders method to access the key. Revisiting this topic, I found another solution.
Kitchen Support
To make this work in Kitchen, just put a cleartext data bag in the data_bags folder that your Kitchen run refers to (probably in test/integration/data_bags). Then the vault commands fall back to using that dummy data when you use chef_vault_item to retrieve it.
Reference: http://hedge-ops.com/chef-vault-tutorial/
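For example, a cleartext stand-in for the pilot item might look like this (the field name and value are hypothetical placeholders, not the real secrets):
{
  "id": "pilot",
  "db_password": "dummy-password"
}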
