Vagrant with HTTPS: can't connect from host to guest

If I run Apache and Varnish on Vagrant, the following works fine on both the guest and the host:
//guest
wget http://localhost/app_dev.php
//host
wget http://localhost:8080/app_dev.php
My Vagrantfile looks like this:
config.vm.network "forwarded_port", guest: 80, host: 8080
Now I want to try SSL, so I change it to:
config.vm.network "forwarded_port", guest: 443, host: 8080
Then on the guest I start httpd, varnish and pound. Now I can't connect from the host anymore:
//on guest:
wget --no-check-certificate https://localhost:443/app_dev.php
//results in 200 OK
//on host
wget --no-check-certificate https://localhost:8080/app_dev.php
//results in
//--2014-06-22 23:43:34-- https://localhost:8080/app_dev.php
//Resolving localhost (localhost)... 127.0.0.1
//Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
//Unable to establish SSL connection.
Not sure what the problem is here; is it not allowed to use SSL over port 8080?
When trying the following in Vagrantfile
config.vm.network "forwarded_port", guest: 443, host: 443
I get a warning while starting up:
==> default: You are trying to forward to privileged ports (ports <= 1024). Most
==> default: operating systems restrict this to only privileged process (typically
==> default: processes running as an administrative user). This is a warning in case
==> default: the port forwarding doesn't work. If any problems occur, please try a
==> default: port higher than 1024.
But I still get the same error when trying wget from the host.
Is it possible to make https connection from host to guest with vagrant? If so then how?
I'm using the fedora 20 box. Tried with the following settings in Vagrantfile:
config.vm.network "private_network", ip: "33.33.33.10"
Then added to my hosts
33.33.33.10 site
When I start httpd, varnish and pound on the guest (httpd listens on 8080, varnish on 80 and pound on 443), I can reach http://site/ and http://site:8080, but not https://site, whereas the same wget from the guest works (200 response with the expected HTML).
On the guest I've tried
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
But same result. I can't think of a reason why the Vagrant Fedora box would block the HTTPS port, though it's possible, as I've got no idea how to use iptables.
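One way to narrow this down before blaming the firewall is to check which address the service is actually bound to: a daemon listening only on 127.0.0.1 is unreachable through a forwarded port or a private-network IP. A small sketch, using a hard-coded sample line since real output varies; on the guest you would run something like sudo netstat -tlnp | grep 443:

```shell
# Sample line in the style of `netstat -tln` output; on the real guest
# you would pipe actual netstat output instead of this string
sample='tcp        0      0 127.0.0.1:443           0.0.0.0:*               LISTEN'

# The 4th column is the local address; take the part before the colon
addr=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d: -f1)
echo "$addr"   # 127.0.0.1 -> reachable only from the guest itself
```

If the address column shows 127.0.0.1 rather than 0.0.0.0 or the private-network IP, the service needs to be re-bound.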

It was a problem in pound; /etc/pound.cfg looked like this:
ListenHTTPS
Address localhost
Port 443
changed to:
ListenHTTPS
Address 33.33.33.10
Port 443
That solved the problem.
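Alternatively, assuming your pound build supports it (and with the certificate directives omitted for brevity, as in the snippets above), binding to all interfaces avoids hard-coding the private IP:

```
ListenHTTPS
    Address 0.0.0.0
    Port    443
```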

Related

CouchDB on Vagrant VM (Scotch box)

I'm trying to set up CouchDB (and ultimately use PouchDB) on a Scotch Box VM. The VM runs fine, and includes port forwarding for port 5984 via the code below:
Vagrant.configure("2") do |config|
config.vm.box = "scotch/box-pro"
config.vm.hostname = "scotchbox"
config.vm.network "forwarded_port", guest: 80, host: 8080
config.vm.network "forwarded_port", guest: 5984, host: 5984
config.vm.network "forwarded_port", guest: 3000, host: 3000
config.vm.network "private_network", ip: "192.168.33.10"
The VM runs fine: localhost:8080 loads the PHP home page, and localhost:3000 loads a node script if I have the node server running, but localhost:5984 only returns an empty response, whether loaded from the browser or from the host machine command line with curl: curl: (52) Empty reply from server.
When I have used vagrant ssh to access the VM, I can use curl localhost:5984 to obtain {"couchdb":"Welcome","uuid":"9cabeb8f66947adabe9443594aa7f69c","version":"1.6.0","vendor":{"version":"15.10","name":"Ubuntu"}} as expected.
Here is the guide I've been referring to: https://pouchdb.com/guides/setup-couchdb.html
Additional info: When I go to 192.168.33.10:5984 (instead of using the localhost port forwarding), the result is a refused connection.
Any suggestions as to what my issue might be? I had thought it was a forwarding issue, but ports 8080 and 3000 work fine, and going to IP:5984 doesn't work either, which makes me wonder. I also thought maybe the service isn't running, but ssh-ing into the VM and running curl indicates that it is.
Thanks!
By default CouchDB is bound to the localhost address 127.0.0.1, and you need to re-bind it to 0.0.0.0 to make it accessible from outside the Vagrant box. To do that, change the bind_address parameter in the [httpd] section of the default.ini config file, or add the same setting as an override in the local.ini config file.
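For CouchDB 1.x (as in the question), the override would look something like this; the exact path varies by install, commonly /etc/couchdb/local.ini on Linux packages:

```ini
; local.ini -- overrides here win over default.ini and survive upgrades
[httpd]
bind_address = 0.0.0.0
```

After the change, restart CouchDB; curl 192.168.33.10:5984 from the host should then answer.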

How to share a vagrant machine with https

I have a working vagrant VM I want to Share. In my Vagrantfile I have:
config.vm.network "forwarded_port", guest: 80, host: 8080
config.vm.network "private_network", ip: "192.168.1.15"
config.vm.network "forwarded_port", guest: 443, host: 443
in the virtual host I have
<VirtualHost *:443>
...
ServerAlias *.vagrantshare.com
....
</VirtualHost>
I'm not sure about the first line, but it was already there.
I share the machine with
vagrant share --https 443
this is the output:
==> default: Detecting network information for machine...
default: Local machine address: 127.0.0.1
default:
default: Note: With the local address (127.0.0.1), Vagrant Share can only
default: share any ports you have forwarded. Assign an IP or address to your
default: machine to expose all TCP ports. Consult the documentation
default: for your provider ('virtualbox') for more information.
default:
default: Local HTTP port: 8080
default: Local HTTPS port: 443
default: Port: 2222
default: Port: 443
default: Port: 8080
==> default: Checking authentication and authorization...
==> default: Creating Vagrant Share session...
default: Share will be at: towering-badger-9312
==> default: Your Vagrant Share is running! Name: towering-badger-9312
==> default: URL: http://towering-badger-9312.vagrantshare.com
==> default:
==> default: You're sharing your Vagrant machine in "restricted" mode. This
==> default: means that only the ports listed above will be accessible by
==> default: other users (either via the web URL or using `vagrant connect`).
I can see it in vagrant cloud but I got an error while trying to access it via https:
towering-badger-9312.vagrantshare.com is currently unable to handle this request.
HTTP ERROR 500
There's no other useful message in the console; any idea how to debug this?
thanks
Replace this line
config.vm.network "forwarded_port", guest: 443, host: 443
with, for example:
config.vm.network "forwarded_port", guest: 443, host: 8443
First, because forwarded_port is for access from your host; second, you should not be able to bind to privileged port 443 on the host anyway.
Also
vagrant share --https 443
is redundant (docs):
HTTPS (SSL)
Vagrant Share can also expose an SSL port that can be accessed over
SSL. For example, instead of accessing http://foo.vagrantshare.com, it
could be accessed at https://foo.vagrantshare.com.
vagrant share by default looks for any SSL traffic on port 443 in your
development environment. If it cannot find any, then SSL is disabled
by default.
so
vagrant share
should suffice (assuming there's no other issue).
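Putting both suggestions together, the relevant Vagrantfile lines would look something like this (8443 is just an arbitrary unprivileged host port):

```ruby
# Forward HTTP and HTTPS to unprivileged host ports
config.vm.network "forwarded_port", guest: 80,  host: 8080
config.vm.network "forwarded_port", guest: 443, host: 8443
```

followed by a plain vagrant share, which detects SSL traffic on guest port 443 by itself.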

How do I remove a forwarded port in Vagrant?

I downloaded a Vagrantfile and am running it on my CentOS 7 box. When I execute vagrant up, the process starts successfully and the machine is booted and ready. I'm able to access the process using the URL:
http://<IP_ADDRESS_OF_BOX>:8080
However, I don't want Vagrant to use port 8080 and would rather use an obscure port like 8601. So, I modified the Vagrantfile to include another entry for config.vm.network.
Before change - Vagrantfile
Vagrant.configure(2) do |config|
config.vm.box = 'ToraToraTora'
end
After change - Vagrantfile
Vagrant.configure(2) do |config|
config.vm.box = 'ToraToraTora'
config.vm.network "forwarded_port", guest: 80, host: 8601
end
Now I'm able to access the process using the new port:
http://<IP_ADDRESS_OF_BOX>:8601
However, the previous port continues to work too:
http://<IP_ADDRESS_OF_BOX>:8080
Executing sudo netstat -tulpn:
[ToraToraTora#andromeda ~]$ sudo netstat -tulpn | grep 26206
tcp 0 0 127.0.0.1:2222 0.0.0.0:* LISTEN 26206/VBoxHeadless
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 26206/VBoxHeadless
tcp 0 0 0.0.0.0:8601 0.0.0.0:* LISTEN 26206/VBoxHeadless
udp 0 0 0.0.0.0:40168 0.0.0.0:* 26206/VBoxHeadless
[ToraToraTora#andromeda ~]$
Output from running vagrant port:
[ToraToraTora#andromeda app]$ vagrant port
The forwarded ports for the machine are listed below. Please note that
these values may differ from values configured in the Vagrantfile if the
provider supports automatic port collision detection and resolution.
22 (guest) => 2222 (host)
80 (guest) => 8080 (host)
80 (guest) => 8601 (host)
[ToraToraTora#andromeda app]$
How do I stop the Vagrant process from using port 8080 and ONLY use port 8601?
You could explicitly disable the 8080 forwarded port...
Vagrant.configure(2) do |config|
config.vm.box = 'ToraToraTora'
config.vm.network "forwarded_port", guest: 80, host: 8601
config.vm.network "forwarded_port", guest: 80, host: 8080, disabled: true
end
If you make that change and do a vagrant reload, it will clear the 8080 forwarded port. At that point, you can remove the 8080 line from your Vagrantfile.
NOTE: Port forwarding in Vagrant can be compared to radio broadcasts. Guest ports are like radio stations while host ports are like radios. In the same way that a radio station can broadcast to any number of radios, a guest port on the Vagrant machine can be forwarded to multiple ports on the host machine. However, each host port can only receive forwarded traffic from one guest port at a time in the same way that a radio can only be tuned to one station at a time.
So in this case, two radios (ports 8601 and 8080 on the host) were tuned to the same station (port 80 on the guest). The solution was simply to switch off the radio at 8080.
If you are able to use http://<IP_ADDRESS_OF_BOX>, it sounds like you're using a Vagrant private network with a static IP. In that case, all ports are accessible on that IP and you don't necessarily need the forwarded_port option at all.
Also, when running netstat with your options, run it as sudo netstat -tulpn so you'll see the PID/program name behind each port.

Vagrant, NFS, port 80 and sudo

I have a Vagrant environment that I'd very much like to launch on port 80 using NFS. The former because it's Drupal and the non-standard port is causing a bit of heartburn and the latter purely for performance. To do that, as far as I know, I need to use sudo. No problem, sudo vagrant up it is.
The problem I'm running into is that the app generates files and, presumably because the VM was stood up under sudo, those files are owned by root on the host system (OS X) so when the app attempts to write files to the server, permission is denied.
I've altered my Vagrantfile to set the entire project directory to 777. For the sake of disclosure, here are the relevant (and non-standard) snippets from my Vagrantfile:
config.vm.network :hostonly, "192.168.33.10"
config.vm.forward_port 80, 80
# config.vm.share_folder( "v-root", "/vagrant", ".", :nfs => (RUBY_PLATFORM =~ /mingw32/).nil?, :extra => 'dmode=777,fmode=777' )
config.vm.share_folder( "v-root", "/vagrant", ".", :extra => 'dmode=777,fmode=777' )
config.vm.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root", "1"]
Vagrant::Config.run do |config|
config.vm.provision :shell, :path => "provision.vm.sh"
end
Here's what I see happening:
When I boot up from a clean clone of the repository, no problems. Yay.
I do, however, notice that the content of /vagrant, when viewed from the VM itself, is not given full perms (777). This was the case before these changes.
When I boot up after halting the VM...
Generated files can't be written.
Files in /vagrant (again, when viewed from within the VM) are owned by a dialout user. This seems to be an NFS thing, so no problem as long as I can get the first item to work.
UPDATE
Looks like the problem might be my understanding of how NFS works. I'll need to try to rectify that, but if I just remove the NFS component (now commented out and replaced in the snippet above) things seem much more usable. Would still love to know if/how others may have handled this.
Instead of doing sudo vagrant up - which isn't ideal - I'm doing the following:
if Vagrant::Util::Platform.windows?
config.vm.network :forwarded_port, host: 80, guest: 8080
elsif Vagrant::Util::Platform.darwin?
config.vm.network :forwarded_port, host: 8080, guest: 80
config.vm.network :forwarded_port, host: 8443, guest: 443
config.trigger.after [:provision, :up, :reload] do
puts " ==> Sudo Password (to forward ports) "
system('echo "
rdr pass on lo0 inet proto tcp from any to 127.0.0.1 port 80 -> 127.0.0.1 port 8080
rdr pass on lo0 inet proto tcp from any to 127.0.0.1 port 443 -> 127.0.0.1 port 8443
" | sudo pfctl -f - > /dev/null 2>&1; echo "==> Fowarding Ports: 80 -> 8080, 443 -> 8443"')
end
else
config.vm.network :forwarded_port, host: 8080, guest: 80
config.vm.network :forwarded_port, host: 8443, guest: 443
puts " ==> Sudo Password (to forward ports) "
system("sudo ipfw add 100 fwd 127.0.0.1,8080 tcp from any to me 80;
sudo ipfw add 101 fwd 127.0.0.1,8443 tcp from any to me 443")
end
if Vagrant::Util::Platform.darwin?
config.trigger.after [:halt, :destroy] do
system("sudo pfctl -f /etc/pf.conf > /dev/null 2>&1; echo '==> Removing Port Forwarding'")
end
end
(The linux stanza of that is less neat than the OS X ("darwin") stanza, which removes the ports being forwarded on vagrant halt)
What this is doing is setting up port 8080 (and 8443) on the host machine to forward to 80 on the guest, and then using sudo to forward port 80 on the host machine to port 8080 on the host machine.
This means that only the port 80 forwarding is being done as root, instead of the whole vagrant process, and generally makes me happier.
Note: This will still fail on a desktop machine if Skype is binding itself to ports 80 and 443, which it does by default.

How to debug "Vagrant cannot forward the specified ports on this VM" message

I'm trying to start a Vagrant instance and getting the following message:
Vagrant cannot forward the specified ports on this VM, since they
would collide with another VirtualBox virtual machine's forwarded
ports! The forwarded port to 4567 is already in use on the host
machine.
To fix this, modify your current projects Vagrantfile to use another
port. Example, where '1234' would be replaced by a unique host port:
config.vm.forward_port 80, 1234
I opened VirtualBox, but I don't have any running boxes at the moment, so I'm stumped. How can I figure out which process is listening on 4567? Is there a way to list all Vagrant boxes running on my machine?
Thanks,
Kevin
You can see which Vagrant instances are running on your machine by running:
$ vagrant global-status
id name provider state directory
----------------------------------------------------------------------
a20a0aa default virtualbox saved /Users/dude/Downloads/inst-MacOSX
64bc939 default virtualbox saved /Users/dude/svn/dev-vms/ubuntu14
a94fb0a default virtualbox running /Users/dude/svn/dev-vms/centos5
If you don't see any VMs running, your conflict is not a vagrant box (that vagrant knows about). The next thing to do is to fire up the VirtualBox UI, and check to see if it has any instances running. If you don't want to run the UI, you can:
ps -ef |grep VBox
If you have VirtualBox instances running, they should show up in that output, and you should be able to just kill the processes that have VirtualBox in their output. One problem is that one of those processes seems to exist to do keep-alives; just kill off the highest VirtualBox process. If you have a VirtualBox image running that Vagrant doesn't know about, some Vagrant directories may have been deleted manually, which means Vagrant has lost track of the instance.
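The find-and-kill step can be sketched like this; a dummy sleep process stands in for VBoxHeadless so the commands are safe to run anywhere:

```shell
# Stand-in for a stray VBoxHeadless process
sleep 300 &
target=$!

# Locate it by command line, as `ps -ef | grep VBox` would for VirtualBox
pgrep -f 'sleep 300' >/dev/null && echo "found"

# Kill it and confirm it is gone
kill "$target"
wait "$target" 2>/dev/null || true
if kill -0 "$target" 2>/dev/null; then alive=yes; else alive=no; fi
echo "$alive"   # no
```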
Watch out, your Vagrantfile is not the only one being used when bringing up a Vagrant box/instance.
When you get this:
~/dev/vagrant user$ vagrant reload
Vagrant cannot forward the specified ports on this VM, since they
would collide with some other application that is already listening
on these ports. The forwarded port to 8001 is already in use
on the host machine.
To fix this, modify your current projects Vagrantfile to use another
port. Example, where '1234' would be replaced by a unique host port:
config.vm.network :forwarded_port, guest: 8001, host: 1234
Sometimes, Vagrant will attempt to auto-correct this for you. In this
case, Vagrant was unable to. This is usually because the guest machine
is in a state which doesn't allow modifying port forwarding.
~/dev/vagrant user$
You are actually using not only the Vagrantfile from ~/dev/vagrant but also the one from your box distribution's .box file, which is typically located here:
~/.vagrant.d/boxes/trusty/0/virtualbox/Vagrantfile
And if you have a look at it you'll see it has plenty of default port mappings:
$ cat ~/.vagrant.d/boxes//trusty/0/virtualbox/Vagrantfile
$script = <<SCRIPT
bzr branch lp:jujuredirector/quickstart /tmp/jujuredir
bash /tmp/jujuredir/setup-juju.sh
SCRIPT
Vagrant.configure("2") do |config|
# This Vagrantfile is auto-generated by 'vagrant package' to contain
# the MAC address of the box. Custom configuration should be placed in
# the actual 'Vagrantfile' in this box.
config.vm.base_mac = "080027DFD2C4"
config.vm.network :forwarded_port, guest: 22, host: 2122, host_ip: "127.0.0.1"
config.vm.network :forwarded_port, guest: 80, host: 6080, host_ip: "127.0.0.1"
config.vm.network :forwarded_port, guest: 8001, host: 8001, host_ip: "127.0.0.1"
config.vm.network "private_network", ip: "172.16.250.15"
config.vm.provision "shell", inline: $script
end
# Load include vagrant file if it exists after the auto-generated
# so it can override any of the settings
include_vagrantfile = File.expand_path("../include/_Vagrantfile", __FILE__)
load include_vagrantfile if File.exist?(include_vagrantfile)
So, go ahead and edit this file to remove the offending colliding forwarding port(s):
config.vm.network :forwarded_port, guest: 22, host: 2122, host_ip: "127.0.0.1"
config.vm.network :forwarded_port, guest: 80, host: 6080, host_ip: "127.0.0.1"
# config.vm.network :forwarded_port, guest: 8001, host: 8001, host_ip: "127.0.0.1"
By doing:
~/dev/vagrant user$ cp ~/.vagrant.d/boxes//trusty/0/virtualbox/Vagrantfile ~/.vagrant.d/boxes//trusty/0/virtualbox/Vagrantfile.old
~/dev/vagrant user$ vi ~/.vagrant.d/boxes//trusty/0/virtualbox/Vagrantfile
Also watch out for other Vagrantfile inclusions, i.e.:
include_vagrantfile = File.expand_path("../include/_Vagrantfile", __FILE__)
And now it works:
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'trusty'...
==> default: Matching MAC address for NAT networking...
==> default: Setting the name of the VM: vagrant_default_1401234565101_12345
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Adapter 2: hostonly
==> default: Forwarding ports...
default: 22 => 2122 (adapter 1)
default: 80 => 6080 (adapter 1)
default: 22 => 2222 (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Connection timeout. Retrying...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Configuring and enabling network interfaces...
==> default: Mounting shared folders...
default: /vagrant => /Home/user/dev/vagrant/vagrant-docker
==> default: Running provisioner: shell...
default: Running: inline script
...
Hope this helps.
As the message says, the port collides with one already in use on the host. I would simply change the port to some other value on the host machine. So if I am getting the error for
config.vm.forward_port 80, 1234
then I would change it to
config.vm.forward_port 80, 5656
since 1234 might already be in use on my host machine.
For actually inspecting ports on any machine, I use the TCPView utility for the OS in question to see which port is used where.
I ran into this problem and it turned out RubyMine was still holding on to a port. I found out which application was holding on to the port (31337 in my case) by running this command:
lsof -i | grep LISTEN
Output
node 1396 richard.nienaber 7u IPv4 0xffffff802808b320 0t0 TCP *:20559 (LISTEN)
Dropbox 1404 richard.nienaber 19u IPv4 0xffffff8029736c20 0t0 TCP *:17500 (LISTEN)
Dropbox 1404 richard.nienaber 25u IPv4 0xffffff8027870160 0t0 TCP localhost:26165 (LISTEN)
rubymine 11668 richard.nienaber 39u IPv6 0xffffff8024d8e700 0t0 TCP *:26162 (LISTEN)
rubymine 11668 richard.nienaber 65u IPv6 0xffffff8020c6e440 0t0 TCP *:31337 (LISTEN)
rubymine 11668 richard.nienaber 109u IPv6 0xffffff8024d8df80 0t0 TCP localhost:6942 (LISTEN)
rubymine 11668 richard.nienaber 216u IPv6 0xffffff8020c6ef80 0t0 TCP localhost:63342 (LISTEN)
Also note that (in Vagrant 1.6.4 at least) there is the folder ~/.vagrant.d/data/fp-leases, containing files with names like 8080, 8081, etc. Erasing this folder's contents helped me just now.
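That cleanup can be sketched as follows, with a temporary stand-in directory so it is safe to run; the real path is ~/.vagrant.d/data/fp-leases:

```shell
# Stand-in for ~/.vagrant.d/data/fp-leases
LEASE_DIR="$(mktemp -d)/fp-leases"
mkdir -p "$LEASE_DIR"
touch "$LEASE_DIR/8080" "$LEASE_DIR/8081"   # stale per-port lease files

ls "$LEASE_DIR"                             # shows the stale leases
rm -f "$LEASE_DIR"/*                        # erase the folder contents
remaining=$(ls -A "$LEASE_DIR" | wc -l)
echo "$remaining"
```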
Vagrant.configure("2") do |config|
config.vm.network "forwarded_port", guest: 80, host: 8080,
auto_correct: true
end
The final auto_correct parameter, set to true, tells Vagrant to auto-correct any collisions. During a vagrant up or vagrant reload, Vagrant will output information about any collision detections and auto-corrections made, so you can take notice and act accordingly.
https://www.vagrantup.com/docs/networking/forwarded_ports.html
If you use Proxifier (or a similar app) try closing it first. This was a problem I experienced due to Proxifier on OSX 10.9.
I fixed it this way:
vagrant suspend
Close Project on RubyMine IDE
vagrant resume
Open Recent on RubyMine IDE
My observation:
I did not have any processes running on port 8000, so essentially the port forwarding did not work.
Fix:
Phil's answer provided a solution
~/.vagrant.d/boxes/
The above path had other versions of Vagrantfiles that listed port 8000. Once I pruned them all using the command below, I was able to run vagrant up successfully:
vagrant box remove [name] --all
The way out:
$ vagrant suspend
$ vagrant resume
I encountered this issue because I had a VM that was trying to run Postgres, and I had Postgres running on my local machine on port 5432.
After vagrant resume, I got the error:
Vagrant cannot forward the specified ports on this VM, since they
would collide with some other application that is already listening
on these ports. The forwarded port to 5432 is already in use
on the host machine.
Look for what's running on port 5432:
o-ets-webdeveloper:portal me$ lsof -i :5432
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
postgres 1389 me 5u IPv6 0x681a62dc601cf1e3 0t0 TCP localhost:postgresql (LISTEN)
postgres 1389 me 6u IPv4 0x681a62dc6499362b 0t0 TCP localhost:postgresql (LISTEN)
Turns out it's local Postgres, killing those processes allowed me to run vagrant resume successfully.
You have to modify the Vagrantfile in your current directory to include the following:
config.vm.network "forwarded_port", guest: 4567, host: <a port not used by your host machine>
Keep in mind that there is also a hidden folder (.vagrant.d/) containing settings for your Vagrant environment as well as config files for your boxes. Usually this folder is in your home directory.
e.g.
~/.vagrant.d/boxes/<your_box_name>/0/virtualbox/Vagrantfile
Usually this file includes another Vagrantfile located in ~/.vagrant.d/boxes/<your_box_name>/0/virtualbox/include/_Vagrantfile
You have to modify this file as well with the port-forwarding command
After a host crash
I had this problem (with a configuration that had worked for several weeks) after my host machine crashed. I am using the VMware provider.
The problem
The issue was apparently this (I have not understood it 100%):
VMware still had the port mappings of the pre-crash Vagrant VM run, which pointed to IP 192.169.157.131.
When Vagrant started, it requested the mappings for a different IP. It could not get them because they were "in use". It reported the ports as taken although Vagrant itself had made the mappings for the same Vagrant box's previous run.
No matter what I did on the Vagrant side, the ports would not be released.
The source of the problem (presumably)
Presumably, the source of my problem was my network configuration:
Vagrant requested config.vm.network "private_network", ip: 192.169.0.3,
which used VMnet5,
but port mappings use NAT, which on my machine is VMnet8.
Repair attempt 1
I have deactivated VMnet5 manually and changed my Vagrantfile to request
config.vm.network "private_network", ip: 192.169.157.131, the very address that already has the required port mappings.
(The cleaner solution would be to use a dynamic IP via
config.vm.network "private_network", type: "dhcp",
but that is inconvenient for my setup.)
It did not help. Vagrant still complained the ports were unavailable.
Repair attempt 2
I deleted the port mappings in VMware Desktop (Edit -> Virtual Network Editor).
It did not help (would you believe this?).
Vagrant still complained the ports were unavailable.
netstat -ao indeed still reported the ports as LISTENING.
I killed the process reported by netstat: 14032.
netstat still reported the ports as listening, now by a different process:
I killed that process 13492.
netstat still reported the ports as listening, now by a different process:
I killed that process 13340.
(Note the decreasing process IDs.)
netstat no longer reported the ports as listening.
Vagrant ridiculously still complained the ports were unavailable.
netstat no longer reported the ports as listening even after that Vagrant error message.
Huh?
There should be no trace now of the murky past in which those ports were occupied!
Repair attempt 3
I was next planning to reboot my host machine and hope for the best.
But before I did this, I closed the VMware Desktop GUI and gave Vagrant one last try.
And then it worked.
Takeaway: Apparently the VMware infrastructure sometimes holds on to a configuration
more closely than one would like
and also more closely than its own GUI claims.
Refer to my answer here: https://superuser.com/a/1610804/1252585
Writing the content again:
To list all of the LISTENING ports:
$ netstat -a
Use the following command to find the process ID of the process listening on the desired port (8080 in this example):
$ netstat -ano | findstr :8080
The result will be displayed as:
TCP    0.0.0.0:8080           0.0.0.0:0              LISTENING       18024
Here, 18024 is the PID, or process ID.
Then use the following command to kill the process on port 8080:
$ taskkill /PID 18024 /F
or $ taskkill //PID 18024 //F
Result will be displayed as:
$ taskkill //PID 18024 //F
SUCCESS: The process with PID 18024 has been terminated.
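Since the PID is the last whitespace-separated field of each netstat line, extracting it programmatically can be sketched like this (with a hard-coded sample line; on a real machine you would pipe in the actual netstat -ano | findstr output):

```shell
# Sample `netstat -ano` line for a listener; the PID is the last column
line='TCP    0.0.0.0:5000           0.0.0.0:0              LISTENING       18024'
pid=$(printf '%s\n' "$line" | awk '{print $NF}')
echo "$pid"   # 18024 -> the value to pass to taskkill /PID
```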
This usually means you already have another Vagrantfile on your machine whose box uses the same port; you only need to adjust the new Vagrantfile.
Create a forwarded port mapping that allows access to a specific port within the machine from a port on the host machine. In the example below, the colliding host port 5432 is simply changed to something free, such as 1234:
config.vm.network "forwarded_port", guest: 3000, host: 3000
config.vm.network "forwarded_port", guest: 3001, host: 3001
config.vm.network "forwarded_port", guest: 8080, host: 8080
config.vm.network "forwarded_port", guest: 5000, host: 5000
# config.vm.network "forwarded_port", guest: 5432, host: 5432 # old, colliding port
config.vm.network "forwarded_port", guest: 5432, host: 1234   # new port
