My packer build is failing with the following message:
sudo: sorry, you must have a tty to run sudo.
My host is Windows 8 with Vagrant and VirtualBox; my guest is CentOS 7.
From my research, my understanding is that sudo requiring a tty (the requiretty default) is the reason for the message. But I already have the following in ks.cfg to disable that requirement:
sed -i 's/^.*requiretty/#Defaults requiretty/' /etc/sudoers
Could the issue be that there is something I need to set on the Windows/Vagrant SSH side so that a pseudo-tty is created?
This is my first go at Packer; I am using a Packer template that I downloaded.
packer.json below:
{
"variables": {
"version": "{{env `VERSION`}}"
},
"provisioners": [
{
"type": "shell",
"execute_command": "sudo {{.Vars}} sh {{.Path}}",
"scripts": [
"scripts/vagrant.sh",
"scripts/vmtools.sh",
"scripts/cleanup.sh",
"scripts/zerodisk.sh"
]
}
],
"post-processors": [
{
"type": "vagrant",
"output": "INSANEWORKS-CentOS-7.0-x86_64-{{user `version`}}-{{.Provider}}.box"
}
],
"builders": [
{
"type": "virtualbox-iso",
"iso_url": "http://ftp.iij.ad.jp/pub/linux/centos/7/isos/x86_64/CentOS-7-x86_64-NetInstall-1503.iso",
"iso_checksum": "498bb78789ddc7973fe14358822eb1b48521bbaca91c17bd132c7f8c903d79b3",
"iso_checksum_type": "sha256",
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"ssh_wait_timeout": "45m",
"ssh_disable_agent": "true",
"boot_command": [
"<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter><wait>"
],
"disk_size": "40000",
"hard_drive_interface": "sata",
"guest_additions_path": "VBoxGuestAdditions_{{.Version}}.iso",
"guest_additions_sha256": "7b61f523db7ba75aebc4c7bb0cae2da92674fa72299e4a006c5c67517f7d786b",
"guest_os_type": "RedHat_64",
"headless": "true",
"http_directory": "http",
"shutdown_command": "sudo /sbin/halt -p",
"vboxmanage": [
[ "modifyvm", "{{.Name}}", "--memory", "1024" ],
[ "modifyvm", "{{.Name}}", "--cpus", "1" ]
]
}
]
}
Thanks in advance.
You have to enable a PTY for your SSH connection. Add the following configuration item in your builders section:
"ssh_pty" : "true"
See also https://packer.io/docs/templates/communicator.html#ssh_pty
Your "execute_command" in provisioner section should be "execute_command" : "echo 'vagrant' | {{ .Vars }} sudo -E -S sh '{{ .Path }}'"
For a similar error message - 'sudo: no tty present and no askpass program specified' - I found the solution in this article:
http://blog.endpoint.com/2014/03/provisioning-development-environment_14.html
In addition to adding "ssh_pty" : "true" in the builder section, add the following provisioner:
{
"type": "shell",
"execute_command": "echo '{{user `ssh_pass`}}' | {{ .Vars }} sudo -E -S sh '{{ .Path }}'",
"inline": [
"echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers"
]
}
Stack:
Host - Mac
Packer builder type - virtualbox-iso
(using vagrant)
I have encountered an issue during provisioning with HashiCorp Packer for virtualbox-iso on Alpine Linux v3.16.
The provisioning script runs OK, and Packer logs that the build has finished; however, when I open the resulting OVF file in VirtualBox, the copied files and Docker are not present.
I would be grateful for any advice.
I run packer build packer-virtualbox-alpine-governator.json
packer-virtualbox-alpine-governator.json file:
{
"variables": {
"password": "packer"
},
"builders": [
{
"type": "virtualbox-iso",
"memory": 8192,
"guest_os_type": "Other_64",
"iso_url": "https://dl-cdn.alpinelinux.org/alpine/v3.16/releases/x86_64/alpine-standard-3.16.0-x86_64.iso",
"iso_checksum": "file:https://dl-cdn.alpinelinux.org/alpine/v3.16/releases/x86_64/alpine-standard-3.16.0-x86_64.iso.sha256",
"ssh_username": "root",
"ssh_password": "{{user `password`}}",
"shutdown_command": "poweroff",
"hard_drive_interface": "sata",
"boot_command": [
"root<enter><wait>",
"setup-alpine<enter><wait>us<enter><wait>us<enter><wait><enter><wait><enter><wait><enter><wait><enter><wait5>{{user `password`}}<enter><wait>{{user `password`}}<enter><wait><enter><wait><enter><wait><enter><wait15><enter><wait>openssh<enter><wait>openssh-full<enter><wait5>test123<enter><wait5>test123<enter><wait><enter><wait><enter><wait>sda<enter><wait>sys<enter><wait>y<enter><wait30>",
"echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config<enter><wait>",
"/etc/init.d/sshd restart<enter><wait5>"
]
}
],
"provisioners": [
{
"type": "shell",
"inline": ["mkdir -p /opt/site/governator"]
},
{
"type": "file",
"source": "files/docker-compose.yaml",
"destination": "/opt/site/"
},
{
"type": "file",
"source": "files/governator.conf",
"destination": "/opt/site/governator/"
},
{
"type": "shell",
"scripts": [
"scripts/alpine/install-docker-on-alpine.sh"
]
}
]
}
./scripts/alpine/install-docker-on-alpine.sh
#! /bin/ash
cat > /etc/apk/repositories << EOF; $(echo)
https://dl-cdn.alpinelinux.org/alpine/v$(cut -d'.' -f1,2 /etc/alpine-release)/main/
https://dl-cdn.alpinelinux.org/alpine/v$(cut -d'.' -f1,2 /etc/alpine-release)/community/
https://dl-cdn.alpinelinux.org/alpine/edge/testing/
EOF
apk update
apk add docker
addgroup $USER docker
rc-update add docker boot
service docker start
apk add docker-compose
sync
I am using the Ansible provisioner. While executing the Ansible provisioner, it says "no such option".
{
"variables":
{
"aws_access_key": "",
"aws_secret_key": "",
"revision": "0",
"ansible_host":""
},
"builders":[{
"type": "amazon-ebs",
"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
"region": "us-east-2",
"instance_type": "t2.micro",
"source_ami": "ami-09e1c6dd3bd60cf2e",
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
"name": "ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*",
"root-device-type": "ebs"
}},
"ssh_username": "ubuntu",
"ami_name":"honebackend {{ isotime | clean_ami_name }}"
}],
"provisioners":[
{
"type":"shell",
"script":"scripts/ssh_agent.sh"
},
{
"type":"ansible",
"playbook_file":".././ansible/nodejs.yml",
"extra_arguments": [ "-vvv --extra-vars 'ansible_host={{user `host`}} ansible_python_interpreter=/usr/bin/python3'"]
}
]
}
After running this command:
packer build -var 'aws_access_key=...' -var 'aws_secret_key=...' packer.json
It gives the following error:
==> amazon-ebs: Provisioning with Ansible...
==> amazon-ebs: Executing Ansible: ansible-playbook --extra-vars packer_build_name=amazon-ebs packer_builder_type=amazon-ebs -i /tmp/packer-provisioner-ansible845262359 /var/honmanagement/ansible/nodejs.yml -e ansible_ssh_private_key_file=/tmp/ansible-key022072728 -vvv --extra-vars 'ansible_host= ansible_python_interpreter=/usr/bin/python3'
amazon-ebs: Usage: ansible-playbook [options] playbook.yml [playbook2 ...]
amazon-ebs:
amazon-ebs: ansible-playbook: error: no such option: -
Your extra_arguments are wrong. They should be:
"extra_arguments": [
"-vvv",
"--extra-vars",
"'ansible_host={{user `host`}} ansible_python_interpreter=/usr/bin/python3'"
]
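In context, the Ansible provisioner block from the question would then look like this (keeping the playbook path and the user variable exactly as they appear in the question):
{
"type": "ansible",
"playbook_file": ".././ansible/nodejs.yml",
"extra_arguments": [
"-vvv",
"--extra-vars",
"'ansible_host={{user `host`}} ansible_python_interpreter=/usr/bin/python3'"
]
}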
I built a custom box with Packer:
"builders": [{
"type": "vmware-iso",
"iso_urls": "http://mirror.vtti.vt.edu/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1611.iso",
"iso_checksum_type": "sha256",
"iso_checksum": "27bd866242ee058b7a5754e83d8ee8403e216b93d130d800852a96f41c34d86a",
"boot_wait": "10s",
"disk_size": 81920,
"output_directory": "/home/aida/vmware-packer/",
"guest_os_type": "redhat",
"headless": true,
"http_directory": "http",
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"ssh_port": 22,
"ssh_wait_timeout": "10000s",
"shutdown_command": "echo 'vagrant'|sudo -S /sbin/halt -h -p",
"vm_name": "packer-centos-7-x86_64",
"vmx_data": {
"memsize": "4096",
"numvcpus": "2"
},
"boot_command" : [
"<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter><wait>"
]
}],
"provisioners": [
{
"type": "shell",
"scripts": [
"scripts/vagrant.sh",
"scripts/vmware.sh",
"scripts/vagrant.sh",
"scripts/sshd.sh",
"scripts/cleanup.sh"
],
"execute_command": "echo 'vagrant' | {{.Vars}} sudo -S -E bash '{{.Path}}'"
}
],
"post-processors": [{
"output": "builds/{{.Provider}}-centos7.box",
"type": "vagrant"
}]
}
Then I add this box to Vagrant. Now, when I try to run vagrant up, I receive this error:
The provider 'vmware_desktop' could not be found, but was requested to back the
machine 'default'. Please use a provider that exists.
I tried to add the VMware Workstation plugin, but I ran into another error saying that I need a specific licence (I have a Workstation Pro licence).
So, do you have any idea what I should do?
(Just repeating what Frédéric Henri already pointed out, since it's the correct answer.)
You need a Vagrant VMware plugin license to use VMware with Vagrant, see
Vagrant + VMware.
I use the following template to create a Vagrant box with Packer. However, I get the error "Build 'vmware-vmx' errored: Timeout waiting for SSH." How can I fix this?
{
"builders": [{
"type": "vmware-vmx",
"source_path": "/path/to/a/vm.vmx",
"ssh_username": "root",
"ssh_password": "root",
"ssh_wait_timeout": "30s",
"shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
}],
"provisioners": [{
"type": "shell",
"inline": ["echo 'my additional provisioning steps'"]
}],
"post-processors": [{
"type": "vagrant",
"keep_input_artifact": true,
"output": "mycentos.box"
}]
}
Set the headless parameter of the builder to false, start the build, and watch for an error. If no error occurs, increase the timeout parameter; 30s is rather short for instantiating, cloning, and booting the VM.
In your case:
"builders": [{
"type": "vmware-vmx",
"source_path": "/path/to/a/vm.vmx",
"ssh_username": "root",
"ssh_password": "root",
"headless" : false,
"ssh_wait_timeout": "1000s",
"shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
}]
When you don't need the SSH connection during the provisioning process, you can switch it off. See the Packer documentation about communicators; the none option switches off communication between host and guest.
{
"builders": [
{
"type": "vmware-vmx",
"communicator": "none"
}
]
}
See the Packer builders documentation for vmware-vmx.
I just finished my Vagrant box.
It has GemFire and some other things on it. The provisioning works great, but the box has several configuration files that need to go with it.
How can I share my box in a way that those files are carried with it?
An example:
To start a GemFire server you need some region configs, so I want my project's regions to be there in order for it to start in a ready-to-develop state.
How can I do that?
This sounds like a job for a Packer feature. Package your box with Packer using the VirtualBox builder. First my example, then I will explain the feature. I am using this packer.json to package my Vagrant box:
{
"builders": [
{
"type": "virtualbox-ovf",
"source_path": "Redistribution.ovf",
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"ssh_wait_timeout": "10000s",
"headless": true,
"guest_additions_mode": "disable",
"vboxmanage": [
["modifyvm", "{{.Name}}", "--memory", "1024"],
["modifyvm", "{{.Name}}", "--cpus", "2"]
],
"shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
}
],
"provisioners": [
{
"type": "shell",
"script": "packer/putInsecureKey.sh"
}
],
"post-processors": [
{
"type": "vagrant",
"compression_level": 9,
"output": "vagrant-dockerdev.box"
}
]
}
I have installed Packer 0.8.1 and run this command on the exported Vagrant box:
packer build packer.json
The following parameters of the post-processor will put files inside the box, which can then be referenced from a Vagrantfile. See the Packer documentation.
include (array of strings) - Paths to files to include in the Vagrant box. These files will each be copied into the top level directory of the Vagrant box (regardless of their paths). They can then be used from the Vagrantfile.
vagrantfile_template (string) - Path to a template to use for the Vagrantfile that is packaged with the box.
Now back to my example:
"post-processors": [
{
"type": "vagrant",
"compression_level": 9,
"output": "vagrant-dockerdev.box"
"include": ["path/to/file1", "path/to/file2"]
"vagrantfile_template": "path/to/vagrant-template"
}
]