I am trying to create two VMs from the centos/7 box with the following Vagrantfile (extract below):
config.vm.define "buildmaster" do |d|
d.vm.hostname = "buildmaster"
d.vm.network "private_network", ip: "10.217.65.200"
d.vm.provision :shell, path: "scripts/install_ansible.sh"
d.vm.provider "virtualbox" do |v|
v.name = "buildmaster"
end
end
config.vm.define "vm#{1}" do |d|
d.vm.hostname = "vm#{1}"
d.vm.network "private_network", ip: "10.217.65.125"
d.vm.provider "virtualbox" do |v|
v.name = "vm#{1}"
end
end
The first VM gets the assigned IP, which I can see with ip addr show:
[vagrant@buildmaster ~]$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:c3:c0:db brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
valid_lft 78309sec preferred_lft 78309sec
inet6 fe80::5054:ff:fec3:c0db/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:b5:1f:48 brd ff:ff:ff:ff:ff:ff
inet 10.217.65.200/24 brd 10.217.65.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:feb5:1f48/64 scope link
valid_lft forever preferred_lft forever
The second VM, however, doesn't get the assigned IP. I tried different IPs and different ways of passing the IP: as a string literal, from an array of strings, etc.
[vagrant@vm1 ~]$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:c3:c0:db brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
valid_lft 86367sec preferred_lft 86367sec
inet6 fe80::5054:ff:fec3:c0db/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:33:93:fa brd ff:ff:ff:ff:ff:ff
inet6 fe80::a00:27ff:fe33:93fa/64 scope link
valid_lft forever preferred_lft forever
Has anybody else run into this problem and found a solution?
For anyone running into the same problem: this is an issue with Vagrant and CentOS-based boxes, and yes, Atomic Host as well.
I reported the issue here: https://github.com/mitchellh/vagrant/issues/7711 and even though it is marked as fixed, it is not in my recent experience with Vagrant 1.9.0. I still need to add this to the Vagrantfile when using CentOS or Atomic Host:
# Restart networking as a workaround for configured ip not showing up
$network_workaround = <<-NETWORK_WORKAROUND
rm /etc/sysconfig/network-scripts/ifcfg-eth0
systemctl restart network
NETWORK_WORKAROUND
config.vm.provision "network_workaround", type: "shell", privileged: true, inline: $network_workaround
Hope this helps.
I am a newbie to OpenStack (deployed using kolla-ansible) and have created two instances, both Ubuntu 20.04 VMs. I am able to ping and SSH into them from the host machine (192.168.211.133) and vice versa. However, the instances are unable to access the internet. The virtual router is also unable to access the internet.
The configuration of one of the machines is below:
root@kypo-virtual-machine:/etc/apt/sources.list.d# ip netns ls
qrouter-caca1d42-86b4-42a2-b591-ec7a90437029 (id: 1)
qdhcp-0ec41857-9420-4322-9fef-e332c034e98e (id: 0)
root@kypo-virtual-machine:/etc/apt/sources.list.d# ip netns e qrouter-caca1d42-86b4-42a2-b591-ec7a90437029 route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.211.1 0.0.0.0 UG 0 0 0 qg-f31a26b7-25
192.168.64.0 0.0.0.0 255.255.192.0 U 0 0 0 qr-e5c8842c-c2
192.168.211.0 0.0.0.0 255.255.255.0 U 0 0 0 qg-f31a26b7-25
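The router's lack of internet access can be confirmed from inside its namespace with something like this (a sketch, reusing the namespace name listed above):
# Hypothetical check: ping an external address from the router namespace.
ip netns exec qrouter-caca1d42-86b4-42a2-b591-ec7a90437029 ping -c 3 8.8.8.8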
Netplan of instance shows:
# This file is generated from information provided by the datasource. Changes
# to it will not persist across an instance reboot. To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        ens3:
            dhcp4: true
            match:
                macaddress: fa:16:3e:a7:9d:70
            mtu: 1450
            set-name: ens3
And the IP scheme is:
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc fq_codel state UP group default qlen 1000
link/ether fa:16:3e:a7:9d:70 brd ff:ff:ff:ff:ff:ff
inet 192.168.65.39/18 brd 192.168.127.255 scope global dynamic ens3
valid_lft 85719sec preferred_lft 85719sec
inet6 fe80::f816:3eff:fea7:9d70/64 scope link
valid_lft forever preferred_lft forever
From Horizon
IP Addresses
kypo-base-net
192.168.65.39, 192.168.211.250
Security Groups
kypo-base-proxy-sg
ALLOW IPv6 to ::/0
ALLOW IPv4 icmp from 0.0.0.0/0
ALLOW IPv4 22/tcp from 0.0.0.0/0
ALLOW IPv4 udp from b9904736-6d8a
ALLOW IPv4 tcp from b9904736-6d8a
ALLOW IPv4 tcp from 73ca626b-7cfb
ALLOW IPv4 udp from 73ca626b-7cfb
ALLOW IPv4 to 0.0.0.0/0
I was able to resolve the issue by pinpointing that the gateway used by the virtual router (192.168.211.1) was different from the one used by my host VM (192.168.211.2).
kypo@kypo-virtual-machine:/etc/kolla$ ip route show
default via 192.168.211.2 dev ens33 proto dhcp src 192.168.211.133 metric 100
I modified the gateway:
openstack subnet set --gateway 192.168.211.2 public-subnet
And now my instances are able to access the internet.
The main reason for this configuration issue was that, while creating the subnet, I used auto for the --gateway option, and it obviously didn't pick the correct gateway.
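To double-check which gateway Neutron actually has on the subnet before and after the change, something like this should work (a sketch; substitute your own router name):
# Hypothetical verification: show the subnet gateway and the router's external gateway.
openstack subnet show public-subnet -c gateway_ip
openstack router show <your-router> -c external_gateway_info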
I am trying to add CAN bus support to Yocto on a BeagleBone Black.
I configured the kernel with bitbake -c menuconfig virtual/kernel and added the following drivers to the kernel:
Raw CAN Protocol
Broadcast Manager CAN Protocol
CAN Gateway/Router
Platform CAN drivers with Netlink support
CAN bit-timing calculation
TI High End CAN Controller
And I added IMAGE_INSTALL_append = " can-utils iproute2" to local.conf.
When my Yocto image boots up, the serial console shows:
[ 1.239593] can: controller area network core (rev 20170425 abi 9)
[ 1.246828] NET: Registered protocol family 29
[ 1.251438] can: raw protocol (rev 20170425)
[ 1.255758] can: broadcast manager protocol (rev 20170425 t)
[ 1.261517] can: netlink gateway (rev 20190810) max_hops=1
So I think the kernel has the CAN drivers and SocketCAN support.
But there is no CAN device:
root@beaglebone:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 78:a5:04:b4:18:cf brd ff:ff:ff:ff:ff:ff
inet 192.168.100.19/24 brd 192.168.100.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 240b:251:520:5b00:7aa5:4ff:feb4:18cf/64 scope global dynamic mngtmpaddr
valid_lft 2591946sec preferred_lft 604746sec
inet6 fe80::7aa5:4ff:feb4:18cf/64 scope link
valid_lft forever preferred_lft forever
3: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
Could you tell me how I can find the CAN device in ip a?
BR, Soramame
The AM335x has a Bosch C_CAN/D_CAN controller, not the TI High End CAN Controller.
So I changed the kernel config via bitbake -c menuconfig virtual/kernel.
I also modified the device tree and rebuilt the kernel.
Then I could find can0 and can1.
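For reference, the kernel options that matter here are the Bosch C_CAN/D_CAN ones (CONFIG_CAN_C_CAN and CONFIG_CAN_C_CAN_PLATFORM) rather than the TI HECC driver. Once can0 appears, it still has to be configured and brought up manually; a minimal sketch using the iproute2 and can-utils packages already added to the image (the 500 kbit/s bitrate is only an example, adjust it to your bus):
# Set the bitrate and bring the interface up.
ip link set can0 type can bitrate 500000
ip link set can0 up
# Quick smoke test with can-utils: dump incoming frames and send one test frame.
candump can0 &
cansend can0 123#DEADBEEF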
I am trying to run my Docker container on an Oracle Cloud instance.
In the past (on a dedicated server with a public IP), I used to run this command to bind my container: docker run -d -p 80:80 image
But now it doesn't work anymore.
I checked my network interfaces and I am getting confused because I cannot see my public IP. How can I fix this?
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast state UP group default qlen 1000
link/ether 02:00:17:00:8e:77 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.4/24 brd 10.0.0.255 scope global ens3
valid_lft forever preferred_lft forever
inet6 fe80::17ff:fe00:8e77/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:cc:94:7a:d9 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:ccff:fe94:7ad9/64 scope link
valid_lft forever preferred_lft forever
I can't give you a complete answer without knowing what exactly your network setup is.
However, all information can be found here.
To summarize, for instances in Oracle Cloud Infrastructure to be accessible from the outside there is a set of prerequisites:
Create a VCN and a public subnet.
Create an Internet Gateway in your VCN.
Add that Internet Gateway to the subnet's route table.
Create an instance (only the private IP will be visible inside your instance; in your case, it is 10.0.0.4).
Assign a public IP to your instance (in reality OCI links the public IP to the private one and not to the instance itself).
If you already have a public subnet you should have seen an "assign public IP" checkbox while creating the instance.
Please feel free to add more details about your setup.
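One note on the interface listing in the question: even with a public IP assigned, it will never show up in ip addr, because OCI maps it to the private address, so 10.0.0.4 on ens3 is expected. A quick way to confirm the address the instance egresses from and that the container is actually listening (a sketch, assuming the container is already running):
# The public IP won't appear on ens3; check the outbound address via an external service.
curl -s https://ifconfig.me
# Confirm Docker published port 80 and that something is listening on it.
docker ps --format '{{.Names}} -> {{.Ports}}'
ss -tlnp | grep ':80'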
I've been trying to set up a home "lab" so I can further increase my fluency in Chef. While doing this, I've found an area of frustration that I'm looking both to understand (likely a VBox cause) and to remedy.
The goal is to use my Arch desktop (which hosts VBox) as the workstation for ChefDK (already installed) and to create two Ubuntu VMs (set up and configured):
- chefsvr (hosts the Chef server)
- chefnode (the node to apply Chef recipes on and manage)
Having spent a while trying to get this all up and running, I noticed that bootstrapping the node fails. The command is:
knife bootstrap chefnode --ssh-user mtompkins --sudo --identity-file ~/.ssh/id_rsa --node-name chefnode --run-list 'recipe[learn_chef_httpd]'
The error is:
/opt/chefdk/embedded/lib/ruby/gems/2.4.0/gems/net-ssh-4.1.0/lib/net/ssh/transport/session.rb:90:in `rescue in initialize': Net::SSH::ConnectionTimeout (Net::SSH::ConnectionTimeout)
from /opt/chefdk/embedded/lib/ruby/gems/2.4.0/gems/net-ssh-4.1.0/lib/net/ssh/transport/session.rb:57:in `initialize'
from /opt/chefdk/embedded/lib/ruby/gems/2.4.0/gems/net-ssh-4.1.0/lib/net/ssh.rb:233:in `new'
from /opt/chefdk/embedded/lib/ruby/gems/2.4.0/gems/net-ssh-4.1.0/lib/net/ssh.rb:233:in `start'
from /opt/chefdk/embedded/lib/ruby/gems/2.4.0/gems/net-ssh-multi-1.2.1/lib/net/ssh/multi/server.rb:186:in `new_session'
from /opt/chefdk/embedded/lib/ruby/gems/2.4.0/gems/net-ssh-multi-1.2.1/lib/net/ssh/multi/session.rb:488:in `next_session'
from /opt/chefdk/embedded/lib/ruby/gems/2.4.0/gems/net-ssh-multi-1.2.1/lib/net/ssh/multi/server.rb:138:in `session'
from /opt/chefdk/embedded/lib/ruby/gems/2.4.0/gems/net-ssh-multi-1.2.1/lib/net/ssh/multi/session_actions.rb:36:in `block (2 levels) in sessions'
from /opt/chefdk/embedded/lib/ruby/gems/2.4.0/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context'
In trying to debug the above, I've added a 2nd NIC to the VMs so that the first NIC is now a Host-Only Adapter and the 2nd a Bridged Adapter, as I want to use my internal DNS server. Traffic seems to pass freely and SSH works all around outside of Chef.
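A manual connectivity check along these lines (a sketch, reusing the user and key from the knife command above) reproduces what knife itself attempts and helps rule out a key or sudo problem:
# Hypothetical manual check from the Arch workstation to the node.
ssh -i ~/.ssh/id_rsa mtompkins@chefnode 'hostname && sudo -n true && echo sudo-ok'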
No joy after adding the 2nd adapters.
My next effort was to spin up a 3rd VM and try that as a management node. Instead of Arch I used Ubuntu because I wanted to be as close to the Chef "How-to" as possible. After spinning up and configuring this workstation, everything works as expected.
Any thoughts on this are greatly appreciated. I'd love to use all the tools on my Arch and not be working totally in VMs.
My guess is that there's some networking adjustment I need to make with VirtualBox, but so far I've been unable to identify any.
Many thanks.
Current Versions (although many others tried historically):
VBox 5.2.2r119230
Chef Development Kit Version: 2.4.19
chef-client version: 13.6.4
berks version: 6.3.1
kitchen version: 1.19.2
inspec version: 1.46.2
Additional Info:
Specifics:
Physical IP of Host: 192.168.1.98/24
Guest Bridge Adapter Network: 192.168.1.0/24
Guest Host-Only Adapter Network: 192.168.56.0/24
The Chef nodes have an address in each.
Example:
ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:6a:49:6a brd ff:ff:ff:ff:ff:ff
inet 192.168.56.102/24 brd 192.168.56.255 scope global dynamic enp0s3
valid_lft 1035sec preferred_lft 1035sec
inet6 fe80::a00:27ff:fe6a:496a/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:25:13:43 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.174/24 brd 192.168.1.255 scope global dynamic enp0s8
valid_lft 73642sec preferred_lft 73642sec
inet6 fe80::a00:27ff:fe25:1343/64 scope link
valid_lft forever preferred_lft forever
Add'l #2
Tried bypassing DNS with static entries in the hosts file so traffic would route on the Host-Only subnet. Traffic flows correctly, bypassing the bridge, but there is no improvement when trying to bootstrap a node.
I am trying to create two virtual machines via one Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "centos/7"
config.vm.box_version = "1707.01"
config.vm.define "inf-vm-01" do |node|
config.vm.hostname = "inf-vm-01"
config.vm.network "public_network", bridge: "en0: Wi-Fi (AirPort)", ip: "192.168.1.121"
end
config.vm.define "inf-vm-02" do |node|
config.vm.hostname = "inf-vm-02"
config.vm.network "public_network", bridge: "en0: Wi-Fi (AirPort)", ip: "192.168.1.122"
end
end
As you can see, I would like to bridge each guest machine to my host machine. The problem is that the second virtual machine has an extra bridged interface. This is the output of ip addr:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:ad:a0:96 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
valid_lft 86303sec preferred_lft 86303sec
inet6 fe80::5054:ff:fead:a096/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:1b:8e:eb brd ff:ff:ff:ff:ff:ff
inet 192.168.1.121/24 brd 192.168.1.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe1b:8eeb/64 scope link
valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:e1:d5:bc brd ff:ff:ff:ff:ff:ff
inet 192.168.1.122/24 brd 192.168.1.255 scope global eth2
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fee1:d5bc/64 scope link
valid_lft forever preferred_lft forever
What is wrong with my Vagrantfile?
Your Vagrantfile is wrong: you're using config.vm.network inside the define blocks, so the setting applies to every machine in the file rather than to a single node. You should write it like this (notice how the node1 and node2 variables are used inside the blocks):
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.box_version = "1707.01"

  config.vm.define "inf-vm-01" do |node1|
    node1.vm.hostname = "inf-vm-01"
    node1.vm.network "public_network", bridge: "en0: Wi-Fi (AirPort)", ip: "192.168.1.121"
  end

  config.vm.define "inf-vm-02" do |node2|
    node2.vm.hostname = "inf-vm-02"
    node2.vm.network "public_network", bridge: "en0: Wi-Fi (AirPort)", ip: "192.168.1.122"
  end
end
You can also read the Vagrant documentation, specifically the Defining Multiple Machines chapter.
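After correcting the file, a quick way to confirm each guest now gets only its own bridged interface (a sketch; destroying the machines first avoids leftover adapters from the previous runs):
# Hypothetical re-check after fixing the Vagrantfile.
vagrant destroy -f && vagrant up
vagrant ssh inf-vm-02 -c "ip -4 addr show"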