Ansible connects via WinRM but hangs on first step - ansible

I am trying to use Packer, with Ansible as a provisioner, to build a Windows AMI.
$ packer --version
1.0.3
$ ansible --version
ansible 2.2.0.0
Ansible seems to connect successfully, but then hangs at the first step in the playbook, which downloads 7-Zip. Below are my Packer template and a sample of the Ansible playbook.
Packer
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-west-2",
    "source_ami": "ami-09f47d69",
    "instance_type": "m4.large",
    "ami_name": "Packer windows test",
    "user_data_file": "./scripts/ec2bootstrap.ps1",
    "communicator": "winrm",
    "winrm_username": "Administrator"
  }],
  "provisioners": [
    {
      "type": "powershell",
      "scripts": [
        "./scripts/ec2config.ps1",
        "./scripts/bundleconfig.ps1"
      ]
    },
    {
      "type": "ansible",
      "playbook_file": "../ansible/base_ami_site.yml",
      "extra_arguments": [
        "--connection", "packer",
        "--extra-vars", "ansible_shell_type=powershell ansible_shell_executable=None -vvvv"
      ]
    }
  ]
}
Ansible sample
- name: Download 7-Zip Installer
  win_get_url:
    url: http://www.7-zip.org/a/7z1604-x64.msi
    dest: C:\Users\Administrator\Downloads\7-zip.msi
    force: no
Just to reiterate, it does connect, but nothing runs.
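When debugging a case like this, a minimal connectivity task at the top of the playbook can help separate WinRM problems from task problems. A sketch using the stock win_ping module:

```yaml
# Hypothetical smoke test: if this also hangs, the problem is the
# connection plugin or shell settings, not the win_get_url task.
- name: Verify the WinRM connection before doing real work
  win_ping:
```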

It turns out Packer version 1.0.3 was preventing Ansible from running successfully.
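To guard against running a template with a known-bad Packer release like this, the template itself can declare a version floor via the top-level min_packer_version key. A minimal sketch (the 1.1.0 floor here is an assumption; use whichever version fixed the issue for you):

```json
{
  "min_packer_version": "1.1.0",
  "builders": []
}
```

With this set, Packer refuses to run the template at all if the installed binary is older than the declared version.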

Related

Baking AMIs in Spinnaker Pipeline using Chef-Solo

I'm trying to build a test pipeline in Spinnaker to bake an AMI and then update a CloudFormation template to deploy the AMI to EC2 instances in an auto scaling group.
I have developed a small test Chef cookbook which works great when running Packer locally. I run Berkshelf locally on my laptop to vendor my cookbooks and pull them from our internal Chef Supermarket. Packer is configured with the chef-solo provisioner as shown in the sample Packer template below, and it transfers the cookbooks to the Packer builder EC2 instance and runs Chef. Right now we're testing with Linux, but we want to support both Linux and Windows AMIs.
Is it possible to use chef-solo with a custom packer template with Spinnaker? If so, when and where should berks run to vendor the cookbooks before packer executes?
{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
    "aws_region": "{{env `AWS_REGION`}}",
    "ssh_private_key_file": "{{env `SSH_PRIVATE_KEY_FILE`}}",
    "subnet_id": "{{env `AWS_SUBNET_ID`}}",
    "vpc_id": "{{env `AWS_DEFAULT_VPC_ID`}}"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "region": "{{user `aws_region`}}",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "name": "amzn2-ami-hvm-*-x86_64-gp2",
          "root-device-type": "ebs"
        },
        "most_recent": true,
        "owners": ["amazon"]
      },
      "ami_name": "test-ami-{{timestamp}}",
      "ami_description": "Test Linux AMI",
      "communicator": "ssh",
      "instance_type": "m4.large",
      "subnet_id": "{{user `subnet_id`}}",
      "tags": {
        "Name": "Test Linux AMI"
      },
      "ssh_username": "ec2-user",
      "ssh_keypair_name": "TestKeypair",
      "ssh_private_key_file": "{{user `ssh_private_key_file`}}",
      "vpc_id": "{{user `vpc_id`}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell-local",
      "command": "berks vendor --delete -b ./Berksfile ./cookbooks"
    },
    {
      "type": "chef-solo",
      "cookbook_paths": ["./cookbooks"],
      "run_list": ["recipe[test_cookbook]"]
    }
  ]
}

Vagrant and vmware_desktop

I built a custom box with Packer:
"builders": [{
"type": "vmware-iso",
"iso_urls": "http://mirror.vtti.vt.edu/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1611.iso",
"iso_checksum_type": "sha256",
"iso_checksum": "27bd866242ee058b7a5754e83d8ee8403e216b93d130d800852a96f41c34d86a",
"boot_wait": "10s",
"disk_size": 81920,
"output_directory": "/home/aida/vmware-packer/",
"guest_os_type": "redhat",
"headless": true,
"http_directory": "http",
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"ssh_port": 22,
"ssh_wait_timeout": "10000s",
"shutdown_command": "echo 'vagrant'|sudo -S /sbin/halt -h -p",
"vm_name": "packer-centos-7-x86_64",
"vmx_data": {
"memsize": "4096",
"numvcpus": "2"
},
"boot_command" : [
"<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter><wait>"
]
}],
"provisioners": [
{
"type": "shell",
"scripts": [
"scripts/vagrant.sh",
"scripts/vmware.sh",
"scripts/vagrant.sh",
"scripts/sshd.sh",
"scripts/cleanup.sh"
],
"execute_command": "echo 'vagrant' | {{.Vars}} sudo -S -E bash '{{.Path}}'"
}
],
"post-processors": [{
"output": "builds/{{.Provider}}-centos7.box",
"type": "vagrant"
}]
}
Then I add the resulting box to Vagrant. Now, when I try vagrant up, I receive this error:
The provider 'vmware_desktop' could not be found, but was requested to back the
machine 'default'. Please use a provider that exists.
I tried to add a VMware Workstation plugin, but I ran into another error saying that I need a specific license (I have a Workstation Pro license).
So, do you have any idea what I should do?
(Just repeating what Frédéric Henri already pointed out, since it's the correct answer.)
You need a Vagrant VMware plugin license to use VMware with Vagrant, see
Vagrant + VMware.
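Concretely, once you have the plugin license you install the plugin, point it at the license file, and select the provider in your Vagrantfile. A sketch, assuming the box was added under the name packer-centos-7 (the box name and license path are placeholders):

```ruby
# Run once on the host first:
#   vagrant plugin install vagrant-vmware-desktop
#   vagrant plugin license vagrant-vmware-desktop ~/license.lic
Vagrant.configure("2") do |config|
  config.vm.box = "packer-centos-7"
  config.vm.provider "vmware_desktop" do |v|
    # vmx settings pass straight through to VMware
    v.vmx["memsize"]  = "4096"
    v.vmx["numvcpus"] = "2"
  end
end
```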

Vagrant Private Boxfile generated via Packer/Atlas is 404 when accessed as logged in user

I'm generating an AMI image and passing it through to a vmware_fusion vagrant.box post-processor. This completes successfully, and the Vagrant box page claims that the box is accessible and available. Following the instructions provided on the box file page to init a new project with the box results in...
An error occurred while downloading the remote file. The error
message, if any, is reproduced below. Please fix this error and try
again.
The requested URL returned error: 404 Not Found
When copy/pasting the 404'd URL into a browser, I also get the Atlas 404 page.
I have verified that I am logged in via vagrant login at the console and I am logged in to the Atlas site, so the 404 is not a result of the box being private and myself not being logged in.
I have run other box builds, and they did successfully download at this stage. It kind of seems like Packer/Atlas is bugged right now, but I have no way to verify that.
Here's what my Packer config looks like:
{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS`}}",
    "aws_secret_key": "{{env `AWS_SECRET`}}"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "ami_name": "ami_name_here {{timestamp}}",
      "instance_type": "t2.medium",
      "region": "us-east-1",
      "source_ami": "ami-df38e6b4",
      "user_data_file": "ec2-setup.sh"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "execute_command": "echo 'vagrant' | {{.Vars}} sudo -S -E bash '{{.Path}}'",
      "script": "packer_scripts/setup.sh"
    },
    {
      "type": "shell",
      "inline": [
        "sleep 30",
        "cd /tmp && sudo wget https://apt.puppetlabs.com/puppetlabs-release-pc1-trusty.deb",
        "sudo dpkg -i /tmp/puppetlabs-release-pc1-trusty.deb",
        "sudo apt-get update && sudo apt-get upgrade -y",
        "sudo apt-get install puppet -y"
      ]
    },
    {
      "type": "puppet-masterless",
      "manifest_file": "manifests/default.pp",
      "module_paths": [
        "modules/"
      ]
    }
  ],
  "post-processors": [
    [
      {
        "type": "atlas",
        "artifact": "my/artifact",
        "artifact_type": "amazon.ami",
        "metadata": {
          "created_at": "{{timestamp}}"
        }
      },
      {
        "type": "atlas",
        "artifact": "my/artifact",
        "artifact_type": "vagrant.box",
        "metadata": {
          "created_at": "{{timestamp}}",
          "provider": "vmware_fusion"
        }
      }
    ]
  ],
  "push": {
    "name": "my/artifact",
    "vcs": true
  }
}
After more digging I found more complete documentation on how the Packer/Atlas process works. It seems that Atlas cannot take an AMI built by a Packer builder and convert it into a VM image for other platforms (VMware, VirtualBox). That's unfortunate, since my builds complete much more quickly on my own EC2 instance.
If I'm incorrect here I'd love to know how it can be done. If I find a way, I'll be back to update.

SSH timeout when creating vagrant box with packer

I use the following template to create a Vagrant box with Packer. However, I get the error "Build 'vmware-vmx' errored: Timeout waiting for SSH." How can I fix this?
{
  "builders": [{
    "type": "vmware-vmx",
    "source_path": "/path/to/a/vm.vmx",
    "ssh_username": "root",
    "ssh_password": "root",
    "ssh_wait_timeout": "30s",
    "shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["echo 'my additional provisioning steps'"]
  }],
  "post-processors": [{
    "type": "vagrant",
    "keep_input_artifact": true,
    "output": "mycentos.box"
  }]
}
Set the headless parameter of the builder to false, start the build, and watch for errors. If no error occurs, increase the timeout parameter; 30s is quite short for instantiating, cloning, and booting the VM.
In your case:
"builders": [{
"type": "vmware-vmx",
"source_path": "/path/to/a/vm.vmx",
"ssh_username": "root",
"ssh_password": "root",
"headless" : false,
"ssh_wait_timeout": "1000s",
"shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
}]
If you don't need an SSH connection during the provisioning process, you can switch it off entirely. See the Packer documentation on communicators; the none option disables communication between host and guest.
{
  "builders": [
    {
      "type": "vmware-vmx",
      "communicator": "none"
    }
  ]
}
See the Packer builders documentation for vmware-vmx.

Sharing vagrant with config files

I just finished my Vagrant box.
It has GemFire and some other things on it. The provisioning works great, but the box has several configuration files that need to go with it.
How can I share my box so that those files are carried along with it?
An example:
To start a GemFire server you need some region configs, so I want my project's regions to be included so the box starts in a ready-to-develop state.
How can I do that?
This sounds like a feature of Packer: package your box with Packer using the VirtualBox builder. First my example, and then I will explain the feature. I am using this packer.json to package my Vagrant box:
{
  "builders": [
    {
      "type": "virtualbox-ovf",
      "source_path": "Redistribution.ovf",
      "ssh_username": "vagrant",
      "ssh_password": "vagrant",
      "ssh_wait_timeout": "10000s",
      "headless": true,
      "guest_additions_mode": "disable",
      "vboxmanage": [
        ["modifyvm", "{{.Name}}", "--memory", "1024"],
        ["modifyvm", "{{.Name}}", "--cpus", "2"]
      ],
      "shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "packer/putInsecureKey.sh"
    }
  ],
  "post-processors": [
    {
      "type": "vagrant",
      "compression_level": 9,
      "output": "vagrant-dockerdev.box"
    }
  ]
}
I have installed Packer 0.8.1 and run this command on the exported Vagrant box:
packer build packer.json
The following parameters for the post-processor will put files inside the box which can be referenced by a Vagrantfile. See Packer Documentation.
include (array of strings) - Paths to files to include in the Vagrant box. These files will each be copied into the top level directory of the Vagrant box (regardless of their paths). They can then be used from the Vagrantfile.
vagrantfile_template (string) - Path to a template to use for the Vagrantfile that is packaged with the box.
Now back to my example:
"post-processors": [
{
"type": "vagrant",
"compression_level": 9,
"output": "vagrant-dockerdev.box"
"include": ["path/to/file1", "path/to/file2"]
"vagrantfile_template": "path/to/vagrant-template"
}
]
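The packaged Vagrantfile can then reference the included files, which land next to it at the top level of the box. A sketch of what path/to/vagrant-template might contain (the file name and guest destination are assumptions for illustration):

```ruby
# This Vagrantfile is packaged inside the box, so __FILE__ resolves
# to the unpacked box directory where the "include" files were copied.
Vagrant.configure("2") do |config|
  config.vm.provision "file",
    source: File.expand_path("../file1", __FILE__),
    destination: "/home/vagrant/gemfire/file1"
end
```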