SSH timeout when creating a Vagrant box with Packer

I use the following template to create a Vagrant box with Packer. However, I get the error "Build 'vmware-vmx' errored: Timeout waiting for SSH.". How do I fix this?
{
  "builders": [{
    "type": "vmware-vmx",
    "source_path": "/path/to/a/vm.vmx",
    "ssh_username": "root",
    "ssh_password": "root",
    "ssh_wait_timeout": "30s",
    "shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["echo 'my additional provisioning steps'"]
  }],
  "post-processors": [{
    "type": "vagrant",
    "keep_input_artifact": true,
    "output": "mycentos.box"
  }]
}

Set the headless parameter of the builder to false, start the build, and watch for an error. If no error occurs, increase the timeout parameter: 30s is rather short for instantiating, cloning, and booting the VM.
In your case:
"builders": [{
"type": "vmware-vmx",
"source_path": "/path/to/a/vm.vmx",
"ssh_username": "root",
"ssh_password": "root",
"headless" : false,
"ssh_wait_timeout": "1000s",
"shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
}]
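If the build still times out after that, Packer's debug log usually shows where the SSH wait is stalling. A minimal way to enable it (the template filename here is assumed):
PACKER_LOG=1 packer build template.json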

When you don't need the SSH connection during the build at all, you can switch it off. See the Packer documentation about the communicator; it describes the option none, which disables communication between host and guest. Note that provisioners that need a connection (such as shell or file) cannot run in that case, so this only fits templates where the builder does all the work.
{
  "builders": [
    {
      "type": "vmware-vmx",
      "communicator": "none"
    }
  ]
}
See the Packer builders documentation for the vmware-vmx builder.

Related

Packer provisioners don't save installed packages

I have encountered an issue during provisioning with HashiCorp Packer for virtualbox-iso on Alpine Linux v3.16.
The provisioning script runs OK, and it logs that the build has finished; however, when I open the output OVF file in VirtualBox, the uploaded files and Docker are not present.
I would be grateful for any advice.
I run packer build packer-virtualbox-alpine-governator.json
packer-virtualbox-alpine-governator.json file:
{
  "variables": {
    "password": "packer"
  },
  "builders": [
    {
      "type": "virtualbox-iso",
      "memory": 8192,
      "guest_os_type": "Other_64",
      "iso_url": "https://dl-cdn.alpinelinux.org/alpine/v3.16/releases/x86_64/alpine-standard-3.16.0-x86_64.iso",
      "iso_checksum": "file:https://dl-cdn.alpinelinux.org/alpine/v3.16/releases/x86_64/alpine-standard-3.16.0-x86_64.iso.sha256",
      "ssh_username": "root",
      "ssh_password": "{{user `password`}}",
      "shutdown_command": "poweroff",
      "hard_drive_interface": "sata",
      "boot_command": [
        "root<enter><wait>",
        "setup-alpine<enter><wait>us<enter><wait>us<enter><wait><enter><wait><enter><wait><enter><wait><enter><wait5>{{user `password`}}<enter><wait>{{user `password`}}<enter><wait><enter><wait><enter><wait><enter><wait15><enter><wait>openssh<enter><wait>openssh-full<enter><wait5>test123<enter><wait5>test123<enter><wait><enter><wait><enter><wait>sda<enter><wait>sys<enter><wait>y<enter><wait30>",
        "echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config<enter><wait>",
        "/etc/init.d/sshd restart<enter><wait5>"
      ]
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["mkdir -p /opt/site/governator"]
    },
    {
      "type": "file",
      "source": "files/docker-compose.yaml",
      "destination": "/opt/site/"
    },
    {
      "type": "file",
      "source": "files/governator.conf",
      "destination": "/opt/site/governator/"
    },
    {
      "type": "shell",
      "scripts": [
        "scripts/alpine/install-docker-on-alpine.sh"
      ]
    }
  ]
}
./scripts/alpine/install-docker-on-alpine.sh
#!/bin/ash
# Point apk at the repositories matching the installed Alpine release,
# plus edge/testing.
cat > /etc/apk/repositories << EOF; $(echo)
https://dl-cdn.alpinelinux.org/alpine/v$(cut -d'.' -f1,2 /etc/alpine-release)/main/
https://dl-cdn.alpinelinux.org/alpine/v$(cut -d'.' -f1,2 /etc/alpine-release)/community/
https://dl-cdn.alpinelinux.org/alpine/edge/testing/
EOF

# Install docker and register it to start at boot.
apk update
apk add docker
addgroup $USER docker   # note: $USER may be root (or unset) inside a Packer shell provisioner
rc-update add docker boot
service docker start
apk add docker-compose
sync   # flush writes to disk before the builder powers the VM off
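For reference, a minimal sketch of how to check inside the imported VM whether the provisioned artifacts actually persisted (paths taken from the template above; the service check assumes Alpine's OpenRC):
ls -la /opt/site /opt/site/governator
docker --version
rc-service docker status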

Why does my SystemD service not restart on reboots?

I am building an AMI using Packer, which has a custom SystemD unit configured. The AMI is then deployed to an EC2 instance. The issue is that if the instance reboots, the unit is not started.
Here is my unit:
[Unit]
Description=My service
After=network.target
StartLimitIntervalSec=0
StartLimitAction=reboot
[Service]
Type=simple
Restart=always
RestartSec=30
User=ubuntu
ExecStart=/opt/app/app
[Install]
WantedBy=multi-user.target
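As a side note, the unit file can be lint-checked on any systemd host before it is baked into the image (the path here is where the file ends up after the rsync step in the configuration below):
systemd-analyze verify /etc/systemd/system/app.service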
And here is my Packer configuration:
{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
    "region": "{{env `AWS_REGION`}}"
  },
  "builders": [
    {
      "access_key": "{{user `aws_access_key`}}",
      "ami_name": "my-app-{{timestamp}}",
      "instance_type": "t2.micro",
      "region": "{{user `region`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "name": "ubuntu/images/*ubuntu-bionic-18.04-amd64-server-*",
          "root-device-type": "ebs"
        },
        "owners": ["099720109477"],
        "most_recent": true
      },
      "ssh_username": "ubuntu",
      "type": "amazon-ebs"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "{{template_dir}}/provision.sh"
    },
    {
      "type": "file",
      "source": "{{template_dir}}/files/app.service",
      "destination": "/tmp/upload/etc/systemd/system/app.service"
    },
    {
      "type": "file",
      "source": "{{template_dir}}/../bin/Release/netcoreapp3.1/",
      "destination": "/tmp/upload/opt/app"
    },
    {
      "type": "shell",
      "inline": [
        "sudo rsync -a /tmp/upload/ /",
        "cd /opt/app",
        "sudo systemctl daemon-reload",
        "sudo systemctl enable app.service"
      ]
    }
  ]
}
Strangely, if I SSH into the running EC2 and enable the service, then it does restart after a reboot.
sudo systemctl enable app.service
sudo reboot
This makes me think I am not creating the AMI correctly, but in my Packer configuration I do enable the service!
Why does my AMI not have my SystemD unit enabled?
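One minimal way to narrow this down, assuming the unit name app.service from the configuration above: append a check to the final inline provisioner so the build log records whether the enable actually took effect at bake time.
"sudo systemctl is-enabled app.service",
"ls -l /etc/systemd/system/multi-user.target.wants/"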

Vagrant and vmware_desktop

I built a custom box with Packer:
"builders": [{
"type": "vmware-iso",
"iso_urls": "http://mirror.vtti.vt.edu/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1611.iso",
"iso_checksum_type": "sha256",
"iso_checksum": "27bd866242ee058b7a5754e83d8ee8403e216b93d130d800852a96f41c34d86a",
"boot_wait": "10s",
"disk_size": 81920,
"output_directory": "/home/aida/vmware-packer/",
"guest_os_type": "redhat",
"headless": true,
"http_directory": "http",
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"ssh_port": 22,
"ssh_wait_timeout": "10000s",
"shutdown_command": "echo 'vagrant'|sudo -S /sbin/halt -h -p",
"vm_name": "packer-centos-7-x86_64",
"vmx_data": {
"memsize": "4096",
"numvcpus": "2"
},
"boot_command" : [
"<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter><wait>"
]
}],
"provisioners": [
{
"type": "shell",
"scripts": [
"scripts/vagrant.sh",
"scripts/vmware.sh",
"scripts/vagrant.sh",
"scripts/sshd.sh",
"scripts/cleanup.sh"
],
"execute_command": "echo 'vagrant' | {{.Vars}} sudo -S -E bash '{{.Path}}'"
}
],
"post-processors": [{
"output": "builds/{{.Provider}}-centos7.box",
"type": "vagrant"
}]
}
Then I add the box to Vagrant. Now, when I try vagrant up, I receive this error:
The provider 'vmware_desktop' could not be found, but was requested to back the
machine 'default'. Please use a provider that exists.
I tried to add a VMware Workstation plugin, but I ran into another error saying I need a specific license (I have a Workstation Pro license).
So, do you have any idea what I should do?
(Just repeating what Frédéric Henri already pointed out, since it's the correct answer.)
You need a Vagrant VMware plugin license to use VMware with Vagrant, see
Vagrant + VMware.
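A minimal sketch of the plugin setup, assuming the current vagrant-vmware-desktop plugin (older Vagrant versions used separate vagrant-vmware-workstation/fusion plugins; the desktop plugin also requires HashiCorp's separate Vagrant VMware Utility to be installed) and a license file saved locally as license.lic:
vagrant plugin install vagrant-vmware-desktop
vagrant plugin license vagrant-vmware-desktop license.lic
vagrant up --provider vmware_desktop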

Vagrant Private Boxfile generated via Packer/Atlas is 404 when accessed as logged in user

I'm generating an AMI image and passing that through to a vmware_fusion vagrant.box post-processor. This completes successfully, and the Vagrant box page claims that the box is accessible and available. Using the instructions provided on the box file page to init a new project with the box results in...
An error occurred while downloading the remote file. The error
message, if any, is reproduced below. Please fix this error and try
again.
The requested URL returned error: 404 Not Found
When copy/pasting the 404'd URL into a browser, I also get the Atlas 404 page.
I have verified that I am logged in via vagrant login at the console and I am logged in to the Atlas site, so the 404 is not a result of the box being private and myself not being logged in.
I have run other box builds, and they did successfully download at this stage. It kind of seems like Packer/Atlas is bugged right now, but I have no way to verify that.
Here's what my Packer config looks like:
{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS`}}",
    "aws_secret_key": "{{env `AWS_SECRET`}}"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "ami_name": "ami_name_here {{timestamp}}",
      "instance_type": "t2.medium",
      "region": "us-east-1",
      "source_ami": "ami-df38e6b4",
      "user_data_file": "ec2-setup.sh"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "execute_command": "echo 'vagrant' | {{.Vars}} sudo -S -E bash '{{.Path}}'",
      "script": "packer_scripts/setup.sh"
    },
    {
      "type": "shell",
      "inline": [
        "sleep 30",
        "cd /tmp && sudo wget https://apt.puppetlabs.com/puppetlabs-release-pc1-trusty.deb",
        "sudo dpkg -i /tmp/puppetlabs-release-pc1-trusty.deb",
        "sudo apt-get update && sudo apt-get upgrade -y",
        "sudo apt-get install puppet -y"
      ]
    },
    {
      "type": "puppet-masterless",
      "manifest_file": "manifests/default.pp",
      "module_paths": [
        "modules/"
      ]
    }
  ],
  "post-processors": [
    [
      {
        "type": "atlas",
        "artifact": "my/artifact",
        "artifact_type": "amazon.ami",
        "metadata": {
          "created_at": "{{timestamp}}"
        }
      },
      {
        "type": "atlas",
        "artifact": "my/artifact",
        "artifact_type": "vagrant.box",
        "metadata": {
          "created_at": "{{timestamp}}",
          "provider": "vmware_fusion"
        }
      }
    ]
  ],
  "push": {
    "name": "my/artifact",
    "vcs": true
  }
}
After more digging I found more complete documentation about how the Packer/Atlas process works. It would seem that Atlas cannot accept an AMI built by a Packer builder and convert it into a VM image for other platforms (VMware, VirtualBox), which is unfortunate, since my builds complete much more quickly on my own EC2 instance.
If I'm incorrect here I'd love to know how it can be done. If I find a way, I'll be back to update.

Packer build fails due to tty needed for sudo

My packer build is failing with the following message:
sudo: sorry, you must have a tty to run sudo.
My host is Windows 8 with Vagrant and VirtualBox; my guest is CentOS 7.
From my research, my understanding is that sudo requiring a TTY is the reason for the message. But I already have the following in ks.cfg:
sed -i 's/^.*requiretty/#Defaults requiretty/' /etc/sudoers
Could the issue be that there's something I need to set on the Windows/Vagrant SSH side so that a pseudo-tty is created?
This is my first go at packer.
I am using a packer build that I downloaded.
packer.json below:
{
  "variables": {
    "version": "{{env `VERSION`}}"
  },
  "provisioners": [
    {
      "type": "shell",
      "execute_command": "sudo {{.Vars}} sh {{.Path}}",
      "scripts": [
        "scripts/vagrant.sh",
        "scripts/vmtools.sh",
        "scripts/cleanup.sh",
        "scripts/zerodisk.sh"
      ]
    }
  ],
  "post-processors": [
    {
      "type": "vagrant",
      "output": "INSANEWORKS-CentOS-7.0-x86_64-{{user `version`}}-{{.Provider}}.box"
    }
  ],
  "builders": [
    {
      "type": "virtualbox-iso",
      "iso_url": "http://ftp.iij.ad.jp/pub/linux/centos/7/isos/x86_64/CentOS-7-x86_64-NetInstall-1503.iso",
      "iso_checksum": "498bb78789ddc7973fe14358822eb1b48521bbaca91c17bd132c7f8c903d79b3",
      "iso_checksum_type": "sha256",
      "ssh_username": "vagrant",
      "ssh_password": "vagrant",
      "ssh_wait_timeout": "45m",
      "ssh_disable_agent": "true",
      "boot_command": [
        "<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter><wait>"
      ],
      "disk_size": "40000",
      "hard_drive_interface": "sata",
      "guest_additions_path": "VBoxGuestAdditions_{{.Version}}.iso",
      "guest_additions_sha256": "7b61f523db7ba75aebc4c7bb0cae2da92674fa72299e4a006c5c67517f7d786b",
      "guest_os_type": "RedHat_64",
      "headless": "true",
      "http_directory": "http",
      "shutdown_command": "sudo /sbin/halt -p",
      "vboxmanage": [
        ["modifyvm", "{{.Name}}", "--memory", "1024"],
        ["modifyvm", "{{.Name}}", "--cpus", "1"]
      ]
    }
  ]
}
Thanks in advance.
You have to enable a PTY for your SSH connection. Add the following configuration item to your builders section:
"ssh_pty" : "true"
See also https://packer.io/docs/templates/communicator.html#ssh_pty
Your "execute_command" in provisioner section should be "execute_command" : "echo 'vagrant' | {{ .Vars }} sudo -E -S sh '{{ .Path }}'"
For a similar error message - 'sudo: no tty present and no askpass program specified' - I found the solution in this article:
http://blog.endpoint.com/2014/03/provisioning-development-environment_14.html
In addition to adding "ssh_pty" : "true" in the builder section, add the following provisioner:
{
  "type": "shell",
  "execute_command": "echo '{{user `ssh_pass`}}' | {{ .Vars }} sudo -E -S sh '{{ .Path }}'",
  "inline": [
    "echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers"
  ]
}
Stack:
Host - Mac
Packer builder type - virtualbox-iso
(using vagrant)
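A further optional safety check after appending to /etc/sudoers: validate the file's syntax with sudo's own checker (a minimal sketch; visudo -c ships with sudo and is not part of the original answer):
sudo visudo -c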
