I am trying to use Vagrant on a Windows system. I have already gone through the steps of adding a Vagrant box, running vagrant init, and vagrant up. I also used PuTTYgen and PuTTY to SSH into the VM, as described here: http://blog.osteel.me/posts/2015/01/25/how-to-use-vagrant-on-windows.html
Now, after installing all the necessary packages, I try to run this code on the VM:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'SUCCESSFULLY running flask inside centos68 via apache!\n\n'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
I also went into the Vagrantfile on my local machine, uncommented config.vm.network "private_network", ip: "192.168.33.10" by removing the # sign, and saved it. Flask reports that it is running on http://0.0.0.0:5000/.
But when I type the IP address and port number into the browser, it shows:
The site can't be reached. It seems as if the VM cannot communicate with the local machine.
This kind of problem never occurred on macOS, so I am wondering if it is because of PuTTY. Does anyone know how to solve it? Thanks a lot!
Ok, so I used Git Bash instead of PuTTY for the installation and everything works fine.
Original Post
I have a Windows workstation with WSL2 and Docker installed that I am able to use for container based development in VS Code. I would like to be able to develop inside the containers on this system remotely. I am able to SSH directly into the WSL2 environment on the workstation and am able to start the docker daemon without logging directly into Windows by creating a Task to start the daemon automatically as described here: https://stackoverflow.com/a/59467740/10692741
However when I try to access Docker on the remote machine by following this guide: https://code.visualstudio.com/docs/remote/containers-advanced#_developing-inside-a-container-on-a-remote-docker-host, I get the following error:
error during connect: Get http://docker/v1.24/version: net/http: HTTP/1.x transport connection broken: malformed HTTP status code "\x00c\x00o\x00m\x00m\x00a\x00n\x00d\x00"
I have also tried connecting via a SSH tunnel as outlined here: https://code.visualstudio.com/docs/remote/troubleshooting#_using-an-ssh-tunnel-to-connect-to-a-remote-docker-host and am unable to connect to Docker as well.
Has anyone had success with a setup like this? Or is this not supported due to limitations with Docker on Windows, WSL2, and/or Windows OpenSSH implementation?
Update: 2021-01-21
When I SSH into the Windows machine remotely, I am able to see the Docker containers in the VS Code extension. I am able to start them, stop them, and enter them with the shell. However, when I try to attach VS Code I get the same error shown above.
Things that may have possibly affected this over the past couple days:
Adding SSH keys on my local machine to the ssh-agent via ssh-add /my/key
Exposing Docker daemon on tcp://localhost:2375 without TLS on the remote Windows machine
I also want to note that I've tried using Windows, Mac, and Linux as the local machine. With Mac and Linux I am able to open a remote session into the Windows machine, but from the Windows local machine I am able to SSH into the remote Windows machine yet cannot open a remote connection in VS Code for some reason.
Ok, I was able to get this working using the port/socket forwarding technique. For sake of clarity, I'll use:
local development workstation, local workstation, or just workstation to indicate the computer from which we wish to use VSCode to access Docker containers on ...
the remote Docker host, remote, or just Docker host
Sanity check -- Do you have Docker Desktop installed on both systems? On the local development workstation, you can skip the WSL2 integration, but you'll at least need the client tools, since the VSCode extension uses them.
Steps I took:
I already had Docker with WSL2 integration set up on my main system (which for the purposes of this exercise, became my remote Docker host), along with VSCode, so I knew everything was working there. It sounds like that was your starting point as well.
On another system on the same network (accessed with RDP to make it simple), I already had VSCode installed as well, with the Remote Development Extension Pack. I also have WSL on that system, but only a v1 instance there. Not that WSL on the workstation should be a factor at all for the purposes of this exercise.
I installed Docker Desktop for Windows on that local development workstation.
I also installed the Docker extension for VSCode, since I didn't yet have it on the local development workstation.
On the workstation, I was not yet set up to SSH from PowerShell into my WSL Ubuntu distro on the remote. From PowerShell on the workstation, I generated an ECDSA key (per this and other documents) and added the public key to my authorized_keys on the remote.
On the workstation, I started the OpenSSH Authentication Service and added the newly created key to the agent (in PowerShell) with ssh-add ~\.ssh\id_ecdsa.
I logged out of the workstation and back in so that the path changes were picked up for the Docker desktop install.
I was then able to SSH from PowerShell on the local to Ubuntu/WSL on the remote with the port forwarding. Since I'm using the Windows 10 OpenSSH server as a jumphost to my WSL SSH servers, my command looked slightly different (with a -o "ProxyCommand ..." mainly), but overall the structure is the same as the one listed in the "SSH Tunnel" doc you linked in your question.
On the remote (manually, not through any integration from the local), I did a basic docker run -it --rm ubuntu and left it open.
On the local, from PowerShell, I set the DOCKER_HOST environment variable via [System.Environment]::SetEnvironmentVariable("DOCKER_HOST","tcp://localhost:23750").
I was then able to see the remote container using docker ps on the local. I could also docker exec -it containername bash into it remotely.
Of course, the above two steps aren't needed in the long term for VSCode, they were just part of my process to make sure everything was up and running (since, as you might expect, I did have several points at which I failed during this process).
So with that working, it was a simple matter in VSCode to change the Docker extension's DOCKER_HOST setting to tcp://localhost:23750. And voila, I could see all images on the remote as well as attach to them from VSCode.
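As a side note, the reason the DOCKER_HOST trick works is that Docker clients resolve their endpoint from that environment variable. Here is a minimal stdlib Python sketch of how a tcp:// value like the one above breaks down into host and port (the docker_endpoint helper is hypothetical, purely for illustration, and not part of any Docker API):

```python
import os
from urllib.parse import urlsplit

def docker_endpoint():
    """Parse DOCKER_HOST (e.g. tcp://localhost:23750) into (host, port).

    Returns None when the variable is unset, in which case a Docker client
    would fall back to the platform default socket or named pipe.
    """
    raw = os.environ.get("DOCKER_HOST", "")
    if not raw:
        return None
    parts = urlsplit(raw)
    return parts.hostname, parts.port

# Mirror the PowerShell step: point the client at the forwarded port.
os.environ["DOCKER_HOST"] = "tcp://localhost:23750"
print(docker_endpoint())  # -> ('localhost', 23750)
```

This is why the same tcp://localhost:23750 value works both for the docker CLI and for the VSCode Docker extension's setting.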
Other thing(s) to check
I'll add to this list if we find additional reasons why it might not be working, but for now:
You mention that you are starting the Docker Desktop daemon automatically at startup via Task Scheduler, but you don't mention anything about the WSL2 instance. However, since you are able to SSH into it, I assume you have a way to bring it up as well? My experience has been that, unless the owning user is logged in, WSL terminates any instances after a few seconds, even if a service is running. There's a workaround, I believe, that I can dust off if this is a problem.
I have installed Vagrant and VirtualBox on my Mac. I have created a Windows 10 VM, and it's configured with WinRM.
I am able to run commands on Windows VM through vagrant. However I am not able to see any GUI on the VM.
For example, if I open a command prompt in the Windows VM and issue the command "start chrome.exe", it launches the Chrome browser and the browser UI is displayed. However, if I run the same command through WinRM (vagrant winrm -c "start chrome.exe"), it launches the browser, but the UI is not displayed in the VM. The same issue happens if I run commands through the shell provisioner.
Is there any way, I can run commands from vagrant and the application will be launched in GUI mode in VM?
Is there any way, I can run commands from vagrant and the application will be launched in GUI mode in VM?
No.
From https://msdn.microsoft.com/en-us/library/aa384426(v=vs.85).aspx :
You can use WinRM scripting objects, the WinRM command-line tool, or the Windows Remote Shell command line tool WinRS to obtain management data from local and remote computers ...
WinRM is used for remote management and does not forward a graphical session, so no, you cannot launch a program like Chrome and have its UI displayed somewhere else.
Your best options to run a UI program from your VM are:
running it from the VM GUI (either by enabling the GUI from the Vagrantfile or by opening the VM from VirtualBox)
running vagrant rdp to log in to the VM
The easiest is to run the VM in 'GUI mode' (as opposed to 'headless').
I use VirtualBox from Oracle, which is one of the options easily configured from within your Vagrantfile.
Check out my "Provider-specific configuration" section:
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
config.vm.provider "virtualbox" do |vb|
  # Display the VirtualBox GUI when booting the machine
  # (so we can run the browser)
  vb.gui = true
  vb.name = "windows10-eval"
  # Customize the amount of memory on the VM:
  vb.memory = "2048"
end
When my VM boots I automatically get a GUI which looks exactly as if I was booting a regular Windows machine. This box comes conveniently with chrome already provisioned, but it'd be easy to install it and use it.
Although you can't directly run a GUI app through WinRM, you can add a link to your app in the Windows Startup folder to ensure the app is run at system startup.
Add the following in your provisioning script :
mklink C:\Users\vagrant\AppData\Roaming\Microsoft\Windows\"Start Menu"\Programs\Startup\MyApp.link C:\MyApp\MyApp.exe
shutdown /r /t 1
Context
I am trying to use docker-py to connect to docker-machine on OSX.
I can't simply use the standard Client(base_url='unix://var/run/docker.sock') since docker is running on a docker-machine Virtual Machine, not my local OS.
Instead, I am trying to connect securely to the VM using docker.tls:
from docker import Client
import docker.tls as tls
from os import path

CERTS = path.join(path.expanduser('~'), '.docker', 'machine', 'certs')

tls_config = tls.TLSConfig(
    client_cert=(path.join(CERTS, 'cert.pem'), path.join(CERTS, 'key.pem')),
    ca_cert=path.join(CERTS, 'ca.pem'),
    verify=True
)

client = Client(base_url='https://192.168.99.100:2376', tls=tls_config)
Problem
When I try to run this code (running something like print client.containers() on the next line), I get this error:
requests.exceptions.SSLError: hostname '192.168.99.100' doesn't match 'localhost'
I've been trying to follow the GitHub issue on a similar problem with boot2docker (i.e. the predecessor of docker-machine), but I don't know much about how SSL certificates are implemented. I tried adding 192.168.99.100 localhost to the end of my /etc/hosts file as suggested in the GitHub issue, but that did not fix the issue (even after export DOCKER_HOST=tcp://localhost:2376).
Maybe connecting via the certificates is not the way to go for docker-machine, so any answers with alternative methods of connecting to a particular docker-machine via docker-py are acceptable too.
UPDATE
It seems like v0.5.2 of docker-machine tries to solve this via the --tls-san flag for the create command. I still need to verify this, but installation via brew is still giving v0.5.1, so I'll have to install it manually.
It looks like with docker-py v1.8.0 you can connect to a docker-machine VM like this:
import docker

client = docker.from_env(assert_hostname=False)
print(client.version())
See the doc here
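For context, assert_hostname=False only disables hostname verification on the TLS layer; the certificate chain is still validated. A stdlib sketch of the equivalent toggle (illustrative only, and note that skipping hostname checks weakens security):

```python
import ssl

# Build a client-side TLS context the way a TLS-verifying client would,
# then disable hostname matching (the effect of assert_hostname=False).
context = ssl.create_default_context()
context.check_hostname = False           # don't require the cert CN/SAN to match the host
context.verify_mode = ssl.CERT_REQUIRED  # still validate the certificate chain

print(context.check_hostname)  # -> False
```

This is why the "hostname '192.168.99.100' doesn't match 'localhost'" error from the question disappears: the certificate is still checked, but its subject no longer has to match the IP you dial.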
I installed docker-machine v0.5.2 as detailed in the release on github. Then I just had to create a new machine as follows:
$ docker-machine create -d virtualbox --tls-san <hostname> <machine-name>
Then I added <hostname> <machine-ip> to /etc/hosts. The code worked after that:
from docker import Client
import docker.tls as tls
from os import path

CERTS = path.join(path.expanduser('~'), '.docker', 'machine', 'machines', <machine-name>)

tls_config = tls.TLSConfig(
    client_cert=(path.join(CERTS, 'cert.pem'), path.join(CERTS, 'key.pem')),
    ca_cert=path.join(CERTS, 'ca.pem'),
    verify=True
)

client = Client(base_url='https://<machine-ip>:2376', tls=tls_config)
where I replaced <machine-name> in the CERTS path and replaced <machine-ip> in the base_url.
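If the connection still fails, it can help to confirm the three certificate files actually exist under the machine-specific directory before building the TLSConfig. A small stdlib check (the machine name 'default' and the machine_cert_paths helper are hypothetical, shown only to illustrate the directory layout used above):

```python
from os import path

def machine_cert_paths(machine_name):
    """Return (cert, key, ca) paths for a docker-machine VM's TLS material."""
    certs = path.join(path.expanduser('~'), '.docker', 'machine',
                      'machines', machine_name)
    return (path.join(certs, 'cert.pem'),
            path.join(certs, 'key.pem'),
            path.join(certs, 'ca.pem'))

for p in machine_cert_paths('default'):
    print(p, path.isfile(p))
```

If any of the three prints False, the TLS handshake will fail before hostname checking even comes into play.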
Our Vagrant box takes ~1h to provision, so when vagrant up is run for the first time, I would like to package the box into an image in a local folder at the very end of the provisioning process, so it can be used as a base box the next time it needs to be rebuilt. I'm using the vagrant-triggers plugin to place the code right at the end of the :up process.
Relevant (shortened) Vagrantfile:
pre_built_box_file_name = 'image.vagrant'
# File.file? needs a plain filesystem path, not a file:// URL
pre_built_box_local_path = File.join(Dir.pwd, pre_built_box_file_name)
pre_built_box_path = 'file://' + pre_built_box_local_path
pre_built_box_exists = File.file?(pre_built_box_local_path)

Vagrant.configure(2) do |config|
  config.vm.box = 'ubuntu/trusty64'
  config.vm.box_url = pre_built_box_path if pre_built_box_exists

  config.trigger.after :up do
    unless pre_built_box_exists
      system("echo 'Building gett vagrant image for re-use...'; vagrant halt; vagrant package --output #{pre_built_box_file_name}; vagrant up;")
    end
  end
end
The problem is that vagrant locks the machine while the current (vagrant up) process is running:
An action 'halt' was attempted on the machine 'gett',
but another process is already executing an action on the machine.
Vagrant locks each machine for access by only one process at a time.
Please wait until the other Vagrant process finishes modifying this
machine, then try again.
I understand the dangers of two processes provisioning or modifying the machine at one given time, but this is a special case where I'm certain the provisioning has completed.
How can I manually "unlock" vagrant machine during provisioning so I can run vagrant halt; vagrant package; vagrant up; from within config.trigger.after :up?
Or is there at least a way to start vagrant up without locking the machine?
vagrant
This issue has been fixed in GH #3664 (2015). If this is still happening, it's probably related to plugins (such as AWS), so try without plugins.
vagrant-aws
If you're using AWS, then follow this bug/feature report: #428 - Unable to ssh into instance during provisioning, which is currently pending.
However there is a pull request which fixes the issue:
Allow status and ssh to run without a lock #457
So apply the fix manually, or wait until it's fixed in the next release.
In case you get this error for machines which are no longer valid, try running the vagrant global-status --prune command.
Definitely a bit more of a hack than a solution, but I'd rather a hack than nothing.
I ran into this issue and nothing suggested here worked for me. Even though this question is 6 years old, it's what came up in a Google search (along with precious little else), so I thought I'd share what solved it for me in case anyone else lands here.
My Setup
I'm using vagrant with ansible-local provisioner on a local virtualbox VM, which provisions remote AWS EC2 instances. (i.e. the ansible-local runs on the virtualbox instance, vagrant provisions the virtualbox instance, ansible handles the cloud). This setup is largely because my host OS is Windows and it's a little easier to take Microsoft out of the equation on this one.
My Mistake
I ran an Ansible shell task with a command that doesn't terminate without user input (and did not run it with & to put it in the background).
My Frustration
Even in the Linux subsystem, trying ps aux | grep ruby or ps aux | grep vagrant was unhelpful because the PID would change every time. There's probably a reason for this, likely something to do with how the subsystem works, but I don't know what it is.
My Solution
Just kill the AWS EC2 instances manually, via the console or the CLI, whichever you prefer. The terminal where you were running vagrant provision or vagrant up should then finally complete and spit out the summary output, even if you had Ctrl+C'd out of the command.
Hoping this helps someone!
I have a question for all you Vagrants and TDD'ers out there,
How can I make a Vagrant Ubuntu VM send autotest / guard notifications to a Windows 7 or OS X host?
Details:
I'm trying to build my ultimate road-warrior development environment, so that I can jump between computers, OS's, and countries without worrying about reconfiguring my environment all the time. I'm using Vagrant to make disposable VMs that mirror our production environment, and letting me jump from my work computer (Windows 7) to my home computer (OS X) with minimal hassle.
I am trying to configure my Vagrant Ubuntu VM for use with Test-Driven Development (TDD), and make use of autotest / guard utilities to automatically run my tests on save, and display the results as desktop notifications on the host. I run the Vagrant VM in headless mode, so there is no desktop to receive the notifications, so I need them forwarded to the host.
I have a couple of leads, like using Growl's remote notifications (for receiving, but I don't know how to send them from the Ubuntu VM), or hacking Growl, but I thought that this problem must have been addressed by others out there.
Found a way to make it work on Windows 8 host and Ubuntu vagrant box:
Install the ruby_gntp gem in your Rails project.
Add to Guardfile:
`notification :gntp, :sticky => false, :host => '192.168.0.77', :port => '23053', :password => 'yourpassword'`
192.168.0.77 is the IP of the host machine; you can find it by running ipconfig.
23053 is the standard port for Growl.
Install Growl for Windows and set up a network subscription to the Vagrant box with host 10.0.2.2, port 23053, and password yourpassword.
10.0.2.2 is the default IP of the Vagrant box gateway; you can confirm it by running netstat -rn inside vagrant ssh.
Finally, you can run guard -p and start the tests.
If you get a 'refused' error, the IP in the Guardfile is wrong; for example, this happens if I set the gateway IP of the Windows machine instead of its local IP.
If you get the error 'Notiffany register failed', the IP in Growl for Windows is wrong.
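A quick way to tell those two failure modes apart is to check whether the GNTP port is reachable at all before touching the Guardfile. A stdlib Python sketch (the host and port are the ones used above; the port_open helper is hypothetical):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from inside the Vagrant box: this should be True if Growl for Windows
# is listening on the host, using the IP/port configured in the Guardfile above.
print(port_open('192.168.0.77', 23053))
```

If this prints False, the problem is connectivity (wrong IP, firewall, or Growl not running), not the Guardfile syntax.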
Well, why not just forward all the tests' output to a file, then connect via SSH and see the results?
Basically, the tail -f command comes in handy here.
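If you'd rather script the "follow the log" idea than keep a tail -f session open, the same semantics are easy to sketch in stdlib Python (the log path in the usage comment is hypothetical):

```python
import time

def follow(path, poll_interval=0.5):
    """Yield lines appended to a file, like `tail -f`.

    Starts at the current end of the file and polls for new lines.
    """
    with open(path) as f:
        f.seek(0, 2)  # jump to end of file
        while True:
            line = f.readline()
            if line:
                yield line.rstrip('\n')
            else:
                time.sleep(poll_interval)

# Usage: for line in follow('/tmp/test-output.log'): print(line)
```

The polling loop is the simplest portable approach; inotify-based watching would avoid the sleep but needs a third-party package.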