I have a VMware image running CentOS. I want to create a Vagrant box from it with Packer. I am new to Vagrant; can anyone suggest the steps?
Using Packer to apply additional provisioning steps to an existing VM is supported via the vmware-vmx builder:
This VMware Packer builder is able to create VMware virtual machines
from an existing VMware virtual machine (a VMX file). It currently
supports building virtual machines on hosts running VMware Fusion
Professional for OS X, VMware Workstation for Linux and Windows, and
VMware Player on Linux.
In your situation, where you have an existing CentOS VMX and want to turn it into a Vagrant box, you would create a packer.json configuration file like so:
{
  "builders": [{
    "type": "vmware-vmx",
    "source_path": "/path/to/a/vm.vmx",
    "ssh_username": "root",
    "ssh_password": "root",
    "ssh_wait_timeout": "30s",
    "shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["echo 'my additional provisioning steps'"]
  }],
  "post-processors": [{
    "type": "vagrant",
    "keep_input_artifact": true,
    "output": "mycentos.box"
  }]
}
Packer would clone the source VMX, boot the box, apply any provisioning steps you had, shut down the box, and then output a new Vagrant ".box" file.
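With the template saved, running the build is straightforward; a minimal sketch, assuming the file is named packer.json:
packer validate packer.json
packer build packer.json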
It sounds like you won't be able to.
Packer assumes a base box (for vagrant) and ends at a new box. You can't go from a running VM to a box via Packer.
If you started the CentOS VM using Vagrant, you can run vagrant package to export it as a box.
If you have a running VM you made manually, your best bet is to start over from a Vagrant box. If you want to continue down this route, see: http://docs.vagrantup.com/v2/vmware/boxes.html
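For the Vagrant-managed case, packaging is a one-liner; a minimal sketch, run from the directory containing the Vagrantfile (note that vagrant package is only supported by some providers, such as VirtualBox):
vagrant package --output mycentos.box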
Related
I'm using vsphere-clone as the builder and ansible-playbook as the provisioner to build my machine.
In one of my Ansible tasks, I'm rebooting the machine (after installing some packages and changing network interface names), but sometimes my VM gets a different IP address from DHCP and the Ansible playbook cannot continue with the rest of the tasks. I tried ansible.builtin.setup:
- name: do facts module to get latest information
  setup:
But it's not refreshing the IP. I also tried rebooting with a shell provisioner instead:
{
  "type": "shell",
  "inline": ["echo {{user `ssh_password`}} | sudo -S reboot"],
  "expect_disconnect": true,
  "inline_shebang": "/bin/bash -x"
}
But the next provisioners also use the old IP. Is there a way, with Packer, to refresh the IP?
I'm using Docker on a Windows 10 laptop. I recently tried to get some code running in a container to connect to another server on the network. I ended up making an Ubuntu container and found the issue is an IP conflict between the Docker network and the server resource (172.17.1.3).
There appears to be an additional layer of networking in the Windows Docker setup which isn't present on Unix systems, and the Docker advice to "simply use a bridge network" doesn't resolve this issue.
docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "d60dd1169153e8299a7039e798d9c313f860e33af1b604d05566da0396e5db19",
        "Created": "2020-02-28T15:24:32.531675705Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
Is it possible to change the subnet/gateway to avoid the IP conflict? If so, how? I tried the simple thing and made a new Docker network:
docker network create --driver=bridge --subnet=172.15.0.0/28 --gateway=172.15.0.1 new_subnet_1
There still appears to be a conflict somewhere: I can reach other devices, just nothing in 172.17.0.0/16. I assume it's somewhere in Hyper-V, the vEthernet adapter, or the vSwitch.
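For reference, the route table a container on the new network actually sees can be checked with a throwaway container (BusyBox's ip applet in the alpine image is enough):
docker run --rm --network new_subnet_1 alpine ip route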
UPDATE 1
I took a look in Wireshark (at the PC level) with the new_subnet_1 network, and I did not see these packets leave the vSwitch interface or the PC's NIC.
I did see this Docker forum post indicating an issue with Hyper-V and the vSwitch that could be the cause.
Docker Engine v19.03.5
DockerDesktopVM created by Docker for Windows install
UPDATE 2
After several Hyper-V edits and putting the environment back together, I checked the DockerDesktopVM. After getting in from a privileged container, I found that the docker0 network had the IP conflict. docker0 appears to be the same default bridge network that I was avoiding; because it is a pre-defined network it cannot be removed, and all my traffic was being sent through it.
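For reference, one common way into the Docker Desktop VM is a privileged container that enters the host's namespaces; a sketch, where justincormack/nsenter1 is a community image rather than part of Docker itself:
docker run --rm -it --privileged --pid=host justincormack/nsenter1
# then, inside the VM:
ip addr show docker0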
After several offshoots, and breaking my environment at least once, I found that the solution was easier than I had thought.
Turned off the Docker Desktop service
Added the following line to the %userprofile%\.docker\daemon.json file in Windows 10 (the surrounding settings are elided here):
{
    ...
    "bip": "172.15.1.6/24"
}
The "bip" entry is the new non-conflicting range for docker0.
Restarted the Docker Desktop service
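After the restart, the new range can be verified directly; a minimal check using docker network inspect with a Go-template filter:
docker network inspect bridge -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}'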
Easy solution after chasing options in Hyper-V and the Docker Host Linux VM.
There are 3 machines:
1."desktop"
Windows 10 desktop
(Has the windows shared folder in its hard disk).
NOTE: This is my colleague's machine.
2."laptop"
Windows 10 laptop
(In the same local area network of "desktop", and can read and write the shared folder).
NOTE: This is my machine.
3."vagrant-centos7"
vagrant virtual machine centos7
(a guest machine hosted at "laptop" use vagrant virtualbox provider).
Now my question is this:
From "laptop", I need to make the "desktop" Windows shared folder a synced folder in "vagrant-centos7".
I tried this Vagrantfile setting, but it failed.
"Z://" is the Windows mapped network drive of the "desktop" shared folder, and 192.168.0.108 is the IP of "desktop":
config.vm.synced_folder "Z://", "/vagrant_data2",
  type: "smb",
  smb_host: "192.168.0.108",
  smb_username: "abc",
  smb_password: "123456"
The error message is:
Host path: Z:
Stderr: dev or path not exists.
Thanks for any help :)
I have figured out how to use Sublime SFTP with Vagrant. But I am constantly switching between multiple Vagrant VMs and running multiple VMs at once. In order to connect Sublime SFTP to a VM, you have to set the host:
"host": "127.0.0.1",
"user": "vagrant",
//"password": "",
"port": "2222",
"ssh_key_file": "/home/jeremy/.vagrant/machines/inspire/virtualbox/private_key",
The only problem is that the "port": "2222" field changes depending on when I start up which VMs and how many I am running. That makes it impossible to use Sublime with these VMs without reconfiguring the sftp_servers file first. Is there any way to permanently assign the port to the VM, or a better way to accomplish what I am trying to do?
In your Vagrantfile you can define the SSH port with the config.ssh.port property.
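The host-side port Sublime SFTP dials is Vagrant's forwarded SSH port, so pinning that forwarding also works; a minimal sketch using the forwarded_port override with id: "ssh" (the box name and port 2299 are arbitrary placeholders):
Vagrant.configure(2) do |config|
  config.vm.box = "centos/7"  # placeholder box
  # pin the host port used for SSH so Vagrant never auto-corrects it
  config.vm.network :forwarded_port, guest: 22, host: 2299, id: "ssh"
end
With this in place, the "port" field in sftp_servers can stay at 2299 for that machine.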
I have a Vagrantfile using a box on top of VirtualBox with a provisioning script.
Now I am trying to use Packer to output a box that is already provisioned.
However, I cannot find a builder that takes the ".box" file I already have as input. What am I doing wrong?
I just got a solution to this tiny little problem (converting a Vagrant .box file to an .ova for use by Packer):
Create a VM using the .box file as a base. I use this Vagrantfile, with the box opscode-centos-7.0:
$provisioning_script = <<PROVISIONING_SCRIPT
adduser packer
echo "packer" | passwd packer --stdin
echo "packer ALL=(ALL:ALL) NOPASSWD: ALL" > /etc/sudoers.d/packer
PROVISIONING_SCRIPT

Vagrant.configure(2) do |config|
  config.vm.box = "opscode-centos-7.0"
  config.ssh.insert_key = false
  config.vm.provider "virtualbox" do |v|
    v.name = "packer-base"
  end
  config.vm.provision :shell, inline: $provisioning_script
end
Then run:
vagrant up
vagrant halt
vboxmanage export --ovf20 -o packer-base.ova packer-base
vagrant destroy
This also creates the packer user with a default password so that Packer can easily connect to the instance. Also note the insert_key parameter: it prevents replacing the Vagrant default insecure key with a secure one, which allows subsequent Vagrant setups to connect properly via SSH to the new images (after Packer is done).
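From there, Packer can consume the exported OVA with the virtualbox-ovf builder; a minimal sketch matching the user and VM name created above (the output box name is an arbitrary placeholder):
{
  "builders": [{
    "type": "virtualbox-ovf",
    "source_path": "packer-base.ova",
    "ssh_username": "packer",
    "ssh_password": "packer",
    "shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
  }],
  "post-processors": [{
    "type": "vagrant",
    "output": "provisioned.box"
  }]
}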
Packer out-of-the-box doesn't support using Vagrant boxes as input (yet).
But there is a custom plugin; see this comment.
If you want to build a Vagrant box that runs with the VirtualBox provider, have a look here.
However, it takes an ISO or OVF as input, not a Vagrant box.
Have a look at these templates to get started using the VirtualBox builder with Packer.
Make sure you run the post-processor to convert the VirtualBox VM into a Vagrant box.