I'm playing with boot2docker (Docker 1.6) on Windows 8.1. I wanted to make myself a machine container to play with Ruby, and I want to be able to connect to the Rails server from my Windows host. To start with small steps, first I want to connect to my container from my boot2docker VM. I attach my Dockerfile below; it builds without a problem and I can run a container from it. I do it like so:
docker run -it -p 3000:3000 3564860f7afd /bin/bash
Then in this container I say:
cd ~/myapp && bundle exec rails server -d
And to see if everything is working I do:
~/myapp$ sudo apt-get install wget && wget localhost:3000
and I get HTTP 500, which is OK; I just wanted to check whether the server is running. Then I exit using Ctrl+P, Ctrl+Q. But then on the boot2docker machine I again do
wget localhost:3000
and get
Connecting to localhost:3000 (127.0.0.1:3000)
wget: error getting response: Connection reset by peer
So it seems like port 3000 is not correctly forwarded to the boot2docker VM. What have I done wrong? What did I miss? I googled extensively and tried a couple of things, like explicitly exposing the port from the Dockerfile or adding the -P switch to docker run, but I always end up the same way: it's not working.
Any help will be greatly appreciated.
UPDATE 02.05.2015
I have also tried the things described in the comment from Markus W Mahlberg and the response from VonC. My VM configuration seems to be OK; I also checked in the VirtualBox GUI and it seems fine. Some more info: when I start
boot2docker ssh -vnNTL 3000:localhost:3000
and then open localhost:3000 on my Windows host, I see trace logs in the boot2docker console; they look like this:
debug1: channel 1: free: direct-tcpip: listening port 3000 for localhost port 3000, connect from 127.0.0.1 port 50512 to 127.0.0.1 port 3000, nchannels 3
Chrome tells me that the response was empty. From checking the logs on the container, I know that the request never reached it.
End of update
Update 03.05.2015
I think that my problem has not so much to do with boot2docker or Docker as with my computer configuration. I've been over my Docker/boot2docker configuration so many times that it is rather unlikely I've made a mistake there.
Desperately, I've reinstalled boot2docker and VirtualBox, still with no effect. Any ideas how to debug what can be wrong with my configuration? The only other idea I have is to try doing the same on another machine. But even if this works, my original problem is no less annoying.
End of update
Here is my Dockerfile:
FROM ubuntu
MAINTAINER anonymous <anonymous@localhost.com>
LABEL Description="Ruby container"
# based on https://gorails.com/setup/ubuntu/14.10
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get -y install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers \
&& groupadd anonymous \
&& useradd anonymous -m -g anonymous -g sudo
ENV HOME /home/anonymous
USER anonymous
RUN git clone https://github.com/sstephenson/rbenv.git ~/.rbenv
RUN echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
RUN echo 'eval "$(rbenv init -)"' >> ~/.bashrc
RUN exec $SHELL
RUN git clone https://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build
RUN echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.bashrc
RUN exec $SHELL
RUN git clone https://github.com/sstephenson/rbenv-gem-rehash.git ~/.rbenv/plugins/rbenv-gem-rehash
ENV PATH "$HOME/.rbenv/bin:$HOME/.rbenv/plugins/ruby-build/bin:$PATH"
RUN rbenv install 2.2.1
RUN rbenv global 2.2.1
ENV PATH "$HOME/.rbenv/shims:$PATH"
RUN echo 'gem: --no-ri --no-rdoc' > ~/.gemrc
RUN gem install bundler
RUN git config --global color.ui true
RUN git config --global user.name "mindriven"
RUN git config --global user.email "3dcreator.pl@gmail.com"
RUN ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -C "3dcreator.pl@gmail.com"
RUN sudo apt-get -qy install software-properties-common python-software-properties
RUN sudo add-apt-repository ppa:chris-lea/node.js
RUN sudo apt-get -y install nodejs
RUN gem install rails -v 4.2.0
RUN ~/.rbenv/bin/rbenv rehash
RUN rails -v
RUN sudo apt-get -qy install mysql-server mysql-client
RUN sudo apt-get install libmysqlclient-dev
RUN rails new ~/myapp -d mysql
RUN sudo /etc/init.d/mysql start && cd ~/myapp && rake db:create
See Boot2docker workarounds:
You can use VBoxManage.exe commands to open those ports at the boot2docker VM level, so that your actual VM host can access them.
By default, only port 2222 is open, so that boot2docker ssh can open an interactive SSH session into the VM.
Just make sure VirtualBox is in your PATH.
VBoxManage modifyvm: works when the boot2docker VM isn't started yet, or after a boot2docker stop,
VBoxManage controlvm: works when the boot2docker VM is running, after a boot2docker start.
Let's say your Docker container exposes port 8000 and you want to access it from your other computers on your LAN. You can do it temporarily, using ssh:
Run the following command (and keep it open):
$ boot2docker ssh -vnNTL 8000:localhost:8000
or you can set up a permanent VirtualBox NAT Port forwarding:
$ VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcp-port8000,tcp,,8000,,8000";
If the VM is already running, you should run this other command instead:
$ VBoxManage controlvm "boot2docker-vm" natpf1 "tcp-port8000,tcp,,8000,,8000";
Now you can access your container from your host machine under
localhost:8000
That way, you don't have to mess around with the VirtualBox GUI: selecting the machine called boot2docker-vm from the list on the left, choosing Settings from the Machine menu (or pressing Command-S on a Mac), selecting the Network icon at the top, and finally clicking the Port Forwarding button.
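If you later need to check or remove a rule added this way, VBoxManage can do that from the command line as well. A minimal sketch, assuming the default VM name boot2docker-vm and the rule name tcp-port8000 used above:
# List the NAT port-forwarding rules currently defined on the VM
VBoxManage showvminfo "boot2docker-vm" --machinereadable | grep -i forward
# Delete the rule by the name it was created with (the VM must be stopped for modifyvm)
VBoxManage modifyvm "boot2docker-vm" --natpf1 delete "tcp-port8000"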
boot2docker on Windows (and OS X) runs a VirtualBox VM with Linux in it. By default it exposes only the ports necessary to SSH into the VM. You'll need to modify the VM to get it to expose more ports.
Adding ports to the VM is more about configuring VirtualBox and less about boot2docker (it is a property of the VM, not the software running inside it). Please see the VirtualBox documentation for "port forwarding" and other network configuration. https://www.virtualbox.org/manual/ch06.html
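For the port 3000 case from the question, the whole chain would look roughly like the sketch below (the rule name rails3000 is arbitrary, and boot2docker-vm is the default VM name):
# 1. Publish the container port on the boot2docker VM
docker run -it -p 3000:3000 3564860f7afd /bin/bash
# 2. Forward the same port from the Windows host into the boot2docker VM
#    (use "controlvm ... natpf1" instead if the VM is already running)
VBoxManage modifyvm "boot2docker-vm" --natpf1 "rails3000,tcp,127.0.0.1,3000,,3000"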
Yes, you need to open the ports in the VirtualBox machine:
(screenshot of the VirtualBox port-forwarding settings)
Related
I am trying to create a Consul cluster using Vagrant and VirtualBox.
While trying to download Consul, wget is unable to establish an SSL connection.
Below are the logs from Vagrant. wget is working fine for other downloads, and I verified that the Consul download link is working too. curl works fine on this link as well. But, weirdly, if I use curl in the Vagrant provision, it simply does not download anything at all (no logs).
Can someone help me with this weird wget issue? I tried upgrading wget too.
==> consulredis1: --2017-01-11 02:17:04-- https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip
==> consulredis1: Resolving releases.hashicorp.com (releases.hashicorp.com)... 151.101.65.183, 151.101.129.183, 151.101.193.183, ...
==> consulredis1: Connecting to releases.hashicorp.com (releases.hashicorp.com)|151.101.65.183|:443... connected.
==> consulredis1: Unable to establish SSL connection.
Here is my provisioning script:
#!/bin/bash
# Step 1 - Get the necessary utilities and install them.
sudo apt-get update
sudo apt-get install -y unzip curl wget
sudo apt-get install -y make gcc build-essential
#apt-get install
# Step 2 - Copy the init script to the /etc/init folder.
cp /vagrant/consul.conf /etc/init/consul.conf
# Step 3 - Get the Consul Zip file and extract it.
cd /usr/local/bin
wget http://download.redis.io/redis-stable.tar.gz
curl -k http://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip
unzip *.zip
rm *.zip
# Step 4 - Make the Consul directory.
sudo mkdir -p /etc/consul.d
sudo chmod a+w /etc/consul.d
sudo mkdir /var/consul
# Step 5 - Copy the server configuration.
cp $1 /etc/consul.d/config.json
# Step 6 - Start Consul
exec consul agent -config-file=/etc/consul.d/config.json
My Vagrantfile contents:
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "hashicorp/precise64"

  config.vm.define "consulredis1" do |consulredis1|
    config.vm.provision "shell" do |s|
      s.path = "provision.sh"
      s.args = ["/vagrant/node1/config.json", "/vagrant/node1/redis-cluster-1.init.conf", "/vagrant/node1/redis-cluster-1.conf"]
    end
    consulredis1.vm.hostname = "consulredis1"
    consulredis1.vm.network "private_network", ip: "172.20.20.10"
  end
end
It appears that Hashicorp, the maintainer of Consul, recently changed its download servers to require TLS v1.2 exclusively:
Fastly (who are fronting the releases.hashicorp.com) confirmed that this issue is caused by a change to the Hashicorp endpoint who recently went to support TLS v 1.2 only and removed support for earlier versions (1.0 and 1.1). Unfortunately, the current cURL and related libraries (e.g. OpenSSL) versions on our Precise containers don't support TLS v 1.2.
(notes here, including someone from Hashicorp weighing in)
It appears that the simplest fix, as noted elsewhere in this question, is to upgrade the version of Ubuntu. If you're not able to, you may be able to force wget to choose a specific TLS version (1.2):
wget https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip --secure-protocol=TLSv1_2
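If you want to confirm that TLS 1.2 is really the sticking point, one way is to probe the endpoint directly with openssl. A quick sketch, assuming your openssl build is recent enough to offer the protocol-version flags:
# Should complete the handshake from a box whose OpenSSL supports TLS 1.2
openssl s_client -connect releases.hashicorp.com:443 -tls1_2 < /dev/null
# Expected to fail against this endpoint, since TLS 1.0 was dropped there
openssl s_client -connect releases.hashicorp.com:443 -tls1 < /dev/null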
You need to upgrade some libraries (including libssl), so the best option is to include an apt-get dist-upgrade -y:
#!/bin/bash
# Step 1 - Get the necessary utilities and install them.
sudo apt-get update
sudo apt-get install -y unzip curl wget
sudo apt-get install -y make gcc build-essential
sudo apt-get dist-upgrade -y
#apt-get install
# Step 2 - Copy the init script to the /etc/init folder.
cp /vagrant/consul.conf /etc/init/consul.conf
# Step 3 - Get the Consul Zip file and extract it.
cd /usr/local/bin
wget http://download.redis.io/redis-stable.tar.gz -nc -nv
wget http://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip -nc -nv
unzip *.zip
rm *.zip
# Step 4 - Make the Consul directory.
sudo mkdir -p /etc/consul.d
sudo chmod a+w /etc/consul.d
sudo mkdir /var/consul
# Step 5 - Copy the server configuration.
cp $1 /etc/consul.d/config.json
# Step 6 - Start Consul
exec consul agent -config-file=/etc/consul.d/config.json
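After provisioning with the updated script, a quick sanity check from the host (a sketch; the box name consulredis1 comes from the Vagrantfile above):
# Consul should be present in /usr/local/bin and report its version
vagrant ssh consulredis1 -c "ls -l /usr/local/bin/consul && consul version"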
There seems to be an issue with the distribution "hashicorp/precise64".
I simply switched to using "ubuntu/trusty64" and wget worked fine.
#config.vm.box = "hashicorp/precise64"
config.vm.box = "ubuntu/trusty64"
I was trying to install ejabberd by following the tutorials from several sites on my Ubuntu VM, but I am stuck at the beginning. After I run
sudo apt-get update
sudo apt-get -y install ejabberd
it installs ejabberd. But when I try to run the following
ejabberdctl register admin localhost mypassword
it says ejabberdctl not found. I also tried to restart it with the command below, but it is still the same.
sudo service ejabberd restart
Note: I did not install Erlang separately. Could that be the problem?
Try sudo ejabberdctl.
If that doesn't work, do:
sudo updatedb
sudo locate ejabberdctl
Check whether the output path is in your $PATH variable.
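A minimal sketch of that check, assuming ejabberd was installed from the Ubuntu package (which normally puts ejabberdctl under /usr/sbin, a directory that is often not on a regular user's PATH):
# Where did the package put ejabberdctl?
dpkg -L ejabberd | grep ejabberdctl
# Is that directory on your PATH?
echo $PATH
# If not, call it with its full path (or via sudo)
sudo /usr/sbin/ejabberdctl register admin localhost mypassword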
I have successfully installed Vagrant along with some boxes on my Windows PC. I have to say it works great, creating and destroying VMs with different configurations on the fly.
The only problem I'm facing now is that I want to install Composer. But Composer requires you to point it to php.exe to do so. I don't want to install PHP on my computer, otherwise there is no point in using Vagrant, right? How do I tackle this problem?
I've seen some articles about using Puppet, but I couldn't make much sense out of them.
Thanks in advance.
You just need to install PHP (and curl) in your vagrant box. For instance, execute vagrant ssh to get SSH access to your box and execute the following commands:
$ sudo apt-get install -y php5-cli curl
$ curl -Ss https://getcomposer.org/installer | php
$ sudo mv composer.phar /usr/bin/composer
Now you're ready to use the composer command in your vagrant box.
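For example, from a project directory inside the box (the package name here is only an illustration):

$ composer require monolog/monolog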
You can improve this by making this part of provisioning, the step where a box is set up when running vagrant up. To do this, put the above commands in a shell file (for instance project/vagrant/provision.sh):
sudo apt-get install -y php5-cli curl > /dev/null
curl -Ss https://getcomposer.org/installer | php > /dev/null
sudo mv composer.phar /usr/bin/composer
Now, configure this shell file as a provisioning step in your Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/trusty64"
# configure the shell file as a provision step:
config.vm.provision :shell, path: "vagrant/provision.sh"
end
Now, when running vagrant up (or vagrant provision on an already-created box), the shell file is executed and PHP & Composer are installed.
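To double-check that provisioning worked, you can run a one-off command over SSH from the host (a small sketch):
# Both should print version strings if provisioning succeeded
vagrant ssh -c "php --version"
vagrant ssh -c "composer --version"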
You can also choose to use a box with php and composer pre-installed, like laravel/homestead.
There is also a Vagrant box with Composer pre-installed. Here is the GitHub repository for this box: https://github.com/Swader/homestead_improved.
With Git Bash for Windows, navigate to the folder where homestead_improved was installed.
Run vagrant up, then vagrant ssh to get inside the VM.
Once inside the virtual machine, cd into the /Code directory. You can now use Composer; for example, composer global require "laravel/installer=~1.1" installs the Laravel installer.
Commands to follow when you are in Vagrant Homestead in order to update Composer:
vagrant ssh
cd code (where my Laravel projects are)
composer selfUpdate --2 (the --2 flag is the major Composer version to update to)
I have a Vagrant provisioning script which succeeds: I can see the output from the logs, and my dependencies are being installed, my directories and files are being created and copied over, etc. But when I vagrant ssh into the VM, none of the folders, files, env variables, or installations are there.
Edit: git, curl, etc. work, but gvm, go, $GOPATH, etc. do not, and my go directory does not exist.
I'm confident the provisioning works correctly because I can run my web server from the script and confirm the application is being served.
Is this just the way Vagrant is set up? What's the point of vagrant ssh if so?
I'm running the default "hashicorp/precise32" box, Ubuntu 12.04, default provider.
Shell script
#! /bin/bash
echo "Provisioning virtual machine"
sudo apt-get update
echo "Installing Dependencies"
# Base dependencies: curl git
# Dependencies for gvm: make bison
sudo apt-get install curl git make bison -y 2> /dev/null
# Dependencies for add-apt-repository: python-software-properties software-properties-common
sudo apt-get install python-software-properties software-properties-common -y 2> /dev/null
# This allows us to get an updated version of git, which we need for gvm
sudo add-apt-repository ppa:git-core/ppa -y 2> /dev/null
sudo apt-get update
sudo apt-get install git -y 2> /dev/null
echo "Installing GVM"
bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)
source ~/.gvm/scripts/gvm
echo "Installing and configuring Go"
gvm install go1.4
gvm use go1.4 --default
mkdir -p ~/go/{bin,pkg,src}
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
echo "Installing Nginx"
sudo apt-get install nginx -y 2> /dev/null
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.box = "hashicorp/precise32"
  config.vm.provision :shell, path: "init.sh"
  config.vm.network :forwarded_port, host: 4000, guest: 8080
end
The Vagrant provisioning script is running as root, so ~ refers to /root instead of /home/vagrant.
Options to resolve this are su -c "your command" vagrant, or using absolute paths, which is probably the best approach.
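A rough sketch of both options applied to the script above (the paths and user name assume the standard vagrant user):
# Option 1: absolute paths, so it does not matter that the provisioner runs as root
mkdir -p /home/vagrant/go/{bin,pkg,src}
chown -R vagrant:vagrant /home/vagrant/go
# Option 2: run the user-specific steps as the vagrant user
su -c "curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer | bash" vagrant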
Similar issue: Why is my Vagrant bootstrap file not modifying bash_login?
And more detail in my answer here: Vagrant - Rails Not Installed
When starting the latest (Oct 2014) Hadoop with start-dfs.sh, we are seeing the following error:
connect to host localhost port 22: Connection refused
Install the OpenSSH server.
For Ubuntu the command is:
sudo apt-get install openssh-server
In the hadoop-env.sh file (present in /etc/hadoop), add the following line:
export HADOOP_SSH_OPTS="-p 22"
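After that, a quick way to confirm that sshd is actually up and reachable the way the Hadoop scripts will use it (a sketch):
# Is the SSH daemon running and listening on port 22?
sudo service ssh status
netstat -tln | grep ':22'
# Can we connect the way start-dfs.sh will?
ssh -p 22 localhost exit && echo "SSH to localhost works"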
Configure "HADOOP_SSH_OPTS" in your hadoop-env.sh, to add any SSH CLI
options you need to always be present when the Hadoop scripts use SSH.
A line like 'export HADOOP_SSH_OPTS="-p "' perhaps would be what
you are looking for.
Source: Interweb
Install and start the OpenSSH server. Here are the commands for CentOS:
Install the OpenSSH server:
sudo yum -y install openssh-server openssh-clients
Start SSH server:
sudo service sshd start
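If you also want sshd to come back after a reboot, enable it as well (a sketch; chkconfig is the CentOS 6 way, systemctl the CentOS 7+ way):
sudo chkconfig sshd on
# or, on systemd-based CentOS:
sudo systemctl enable sshd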
On Ubuntu 20.04.1 LTS:
Install OpenSSH
sudo apt install openssh-server openssh-client -y
Start SSH
sudo service ssh start
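And to verify (a quick sketch):
# The daemon should be active and port 22 no longer refused
sudo systemctl --no-pager status ssh
ssh -p 22 localhost exit && echo "connection ok"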