How to connect a bridge to a tunnel with netplan? - ubuntu-20.04

Well, before I asked this question, I searched a lot and couldn't find a proper answer (or perhaps I was asking the wrong question).
I want to connect a bridge to a GRE tunnel with netplan.
I can successfully make the connection, but the routing table doesn't get updated correctly; I am forced to add the route manually for it to work.
So, here's my netplan setup:
network:
  version: 2
  ethernets:
    enp1s0f0:
      dhcp4: no
      accept-ra: false
      addresses:
        - 192.168.0.100/24
      routes:
        - to: default
          via: 192.168.0.254
      match:
        macaddress: d8:5e:d3:43:cd:ae
      set-name: enp1s0f0
      nameservers:
        addresses:
          - 1.1.1.1
          - 1.0.0.1
          - 2606:4700:4700::1111
          - 2606:4700:4700::1001
  tunnels:
    gre1:
      mode: gre
      remote: 192.168.100.150
      local: 192.168.0.100
  bridges:
    br1:
      dhcp4: false
      dhcp6: false
      optional: true
      interfaces: [ gre1 ]
      addresses:
        - 172.16.20.2/30
      routes:
        - to: default
          via: 172.16.20.1
          scope: link
          table: 100
      routing-policy:
        - from: 172.16.20.0/30
          table: 100
With the above configuration, gre1 ends up in state UNKNOWN and br1 in state DOWN.
Pinging 172.16.20.1 fails.
But, if I run:
ip route add 172.16.20.0/30 dev gre1
I can ping without any issues.
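For what it's worth, the same route can be expressed declaratively; a hedged sketch (untested, my own assumption that netplan accepts a link-scope route under the tunnel definition):
tunnels:
  gre1:
    mode: gre
    remote: 192.168.100.150
    local: 192.168.0.100
    # Untested assumption: persist the manual workaround
    # (ip route add 172.16.20.0/30 dev gre1) in netplan itself.
    routes:
      - to: 172.16.20.0/30
        scope: link
This only persists the workaround, of course; it doesn't explain why the connected route isn't installed in the first place.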
Does anyone have a clue how to solve this riddle?

Related

How to get netplan to replace sections instead of merge?

I am trying to set up netplan so that each YAML file contains only a portion of the configuration.
The idea is to use a specific config file when a machine should use a specific DNS.
To do that, I have two YAML config files: /etc/netplan/01-netcfg.yaml and /etc/netplan/02-dns.yaml.
Their content is really simple.
/etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: true
      dhcp6: false
      optional: true
      nameservers:
        addresses: [4.2.2.1, 4.2.2.2, 208.67.220.220]
and /etc/netplan/02-dns.yaml:
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      nameservers:
        addresses: [1.1.1.1]
When I apply the configuration and check the DNS configuration, I get the following:
# systemd-resolve --status
Global
[...]
Current DNS Server: 4.2.2.1
       DNS Servers: 4.2.2.1
                    4.2.2.2
                    208.67.220.220
[...]
Link 2 (eth0)
[...]
Current DNS Server: 4.2.2.1
       DNS Servers: 4.2.2.1
                    4.2.2.2
                    208.67.220.220
                    1.1.1.1
                    10.0.2.3
So it appears that the configurations were merged: the DNS servers from the 02-dns.yaml file got added to those from 01-netcfg.yaml (which makes sense...).
How do I get netplan to 'replace' instead of 'merge'?
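One observation: the accumulated list above also contains 10.0.2.3, which comes from DHCP rather than from either YAML file. A hedged sketch (assuming the goal is to keep only statically configured servers) that tells networkd to ignore DHCP-supplied DNS:
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: true
      # Keep the DHCP lease but ignore the DNS servers it advertises,
      # so only statically configured nameservers reach systemd-resolved.
      dhcp4-overrides:
        use-dns: false
      nameservers:
        addresses: [1.1.1.1]
This only removes the DHCP entry; how netplan combines the static lists across the 01 and 02 files is a separate behavior.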

Homestead ERR_SOCKET_NOT_CONNECTED After Mac OS Update

I recently updated to Big Sur on my Mac and haven't been able to access any of my VMs as web pages since. I've destroyed and rebuilt this VM using Homestead's expected installation process.
VirtualBox 6.1.28
Latest Homestead version from release branch
Vagrant 2.2.18
This is the error I'm seeing in the browser: ERR_SOCKET_NOT_CONNECTED.
After long hours of research, loads of people seem to resolve this by adding the site to their hosts file. To confirm, I have added it to hosts (the error screenshot was taken without the site in hosts), but the error persists.
Update: I'm able to view the VM if I go to http://localhost:8000/. Going to http://192.168.10.10 doesn't work.
From the vagrant box using curl 192.168.10.10 produces the expected HTML output of that page. So does curl localhost:8000 from my machine. If I try curl 192.168.10.10 from my machine I get curl: (55) getpeername() failed with errno 22: Invalid argument.
I've tried every other network configuration within VirtualBox, and using NAT is the only one that allows the SSH connection. It seems requests aren't making it to the VirtualBox VM, probably because of the error stating the socket isn't connected.
Socket stats seem to show it's listening on port 80.
As of now I have destroyed and rebuilt the box again, so it is as close to an expected installation that anyone should have.
p.s. "site.test" is a placeholder for the actual name.
Here is my Homestead.yaml: https://pastebin.com/qhPdWCNv
---
ip: "192.168.10.10"
memory: 2048
cpus: 2
provider: virtualbox
authorize: ~/.ssh/id_rsa.pub
keys:
    - ~/.ssh/id_rsa
folders:
    - map: ~/Sites
      to: /home/vagrant/code/
sites:
    - map: site.test
      to: /home/vagrant/code/site/public
      php: "7.4"
databases:
    - homestead
features:
    - mysql: true
    - mariadb: false
    - postgresql: false
    - ohmyzsh: false
    - webdriver: false
services:
    - enabled:
        - "mysql"
#    - disabled:
#        - "postgresql@11-main"
#ports:
#    - send: 33060 # MySQL/MariaDB
#      to: 3306
#    - send: 4040
#      to: 4040
#    - send: 54320 # PostgreSQL
#      to: 5432
#    - send: 8025 # Mailhog
#      to: 8025
#    - send: 9600
#      to: 9600
#    - send: 27017
#      to: 27017
Here is my hosts file: https://pastebin.com/Y6Re15iy
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
192.168.10.10 site.test
::1 localhost
VirtualBox 6.1.28 seems to restrict the valid IP addresses of a guest.
Took me hours to figure this out; read the manual.
Solved by changing the ip in my Homestead.yaml to 192.168.56.0 and also altering the IP in /etc/hosts.
From the manual:
On Linux, Mac OS X and Solaris, Oracle VM VirtualBox will only allow IP addresses in the 192.168.56.0/21 range to be assigned to host-only adapters.
For anyone who encounters this issue: I fixed it by updating the VM to a new IP, changing the domain, and clearing the DNS cache (dscacheutil -flushcache).
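If you would rather keep the original 192.168.10.10 address, newer VirtualBox versions (6.1.28+) can, as far as I know, be told to allow extra host-only ranges via /etc/vbox/networks.conf on the host; a hedged sketch:
# /etc/vbox/networks.conf (on the host, created as root)
# Allow host-only adapters in the range Homestead uses by default.
* 192.168.10.0/24
Then run vagrant reload so the host-only adapter is recreated.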

Configured site for Homestead not working

Here is my Homestead.yaml config:
ip: 192.168.9.10
memory: 2048
cpus: 2
provider: vmware_desktop
authorize: ~/.ssh/id_rsa.pub
keys:
    - ~/.ssh/id_rsa
folders:
    - map: 'C:\Users\Khalil\RestfulAPI'
      to: /home/vagrant/restfulapi
sites:
    - map: restfulapi.dev
      to: /home/vagrant/restfulapi/public
      php: "8.0"
databases:
    - homestead
features:
    - mariadb: false
    - ohmyzsh: false
    - webdriver: false
name: restfulapi
hostname: restfulapi
I also added it to the hosts file: 192.168.9.10 restfulapi.dev
It works when I use http://restfulapi.local without mapping it directly as a site, since I guess *.local is the default.
Browsing to restfulapi.dev takes forever to load and then gives a "connection timed out". Pinging the IP from my host responds with "TTL expired in transit" four times and ends with:
Ping statistics for 192.168.9.10:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
I can run vagrant ssh and ping 192.168.9.10 normally and get a response, and the address is also configured on eth1 in the machine when I run ip address show.
I've tried provisioning, destroying the VM, and setting everything up from scratch.
I attach my Homestead.yaml file for a test project that I am developing using the Laravel framework and PHPStorm:
Homestead.yaml (screenshot): https://i.stack.imgur.com/VqkHh.png
After making changes to your Homestead.yaml file, execute:
vagrant reload --provision
Once the virtual machine has finished booting, SSH into it with: vagrant ssh
Delete the PHP version from your Homestead.yaml and run vagrant up and vagrant provision.
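Putting those steps together, a hedged sequence (the in-VM curl check is my own addition, assuming Homestead's default nginx setup):
# Re-provision after editing Homestead.yaml
vagrant reload --provision
# SSH into the VM once it has booted
vagrant ssh
# Inside the VM: confirm nginx answers for the mapped host locally
# (hypothetical check, not part of the original answer)
curl -I -H "Host: restfulapi.dev" http://127.0.0.1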

Greenplum 5.11 gpload hangs when invoked on master node

[gpadmin@mdw ssb_gp_scripts]$ cat d_gpload.yaml
---
VERSION: 1.0.0.1
DATABASE: ssb
USER: gpadmin
HOST: mdw
PORT: 5432
GPLOAD:
  INPUT:
    - SOURCE:
        LOCAL_HOSTNAME:
          - mdw
        PORT: 8080
        FILE:
          - /ssb/ssb/dimdate.tbl
        SSL: false
    - FORMAT: csv
    - DELIMITER: '|'
    - HEADER: false
    - ENCODING: UNICODE
    - ERROR_LIMIT: 100
    - LOG_ERRORS: true
  EXTERNAL:
    - SCHEMA: orders
  OUTPUT:
    - TABLE: orders.dimdate
    - MODE: insert
  PRELOAD:
    - TRUNCATE: true
    - REUSE_TABLES: true
Above is the YAML file on the master host.
[gpadmin@mdw ssb_gp_scripts]$ gpload -f d_gpload.yaml -l d_gpload.log
2018-10-13 13:12:02|INFO|gpload session started 2018-10-13 13:12:02
2018-10-13 13:12:02|INFO|started gpfdist -p 8080 -P 8081 -f "/ssb/ssb/dimdate.tbl" -t 30
2018-10-13 13:12:02|INFO|reusing external table ext_gpload_reusable_179f5634_ced8_11e8_822a_0a78550cb23a
It hangs at this point and never moves.
My cluster is in AWS.
The issue was due to HTTP port 8080 not being open between the nodes.
Opening this port in the AWS Security Group resolved the issue.
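Before changing anything in gpload itself, a quick way to confirm this kind of problem is to check that a segment host can reach the gpfdist port on the master. A hedged sketch, assuming segment hosts named sdw1, sdw2, ... (hypothetical names):
# From the master: test the gpfdist HTTP port from a segment host
ssh sdw1 'nc -zv mdw 8080'
# or, if nc is not installed:
ssh sdw1 'curl -sv http://mdw:8080/ >/dev/null'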

How to change box name in Chef kitchen.yml?

I have created my kitchen.yml in the following way:
---
driver:
  name: vagrant
  customize:
    memory: 2048
driver_config:
  require_chef_omnibus: true
  use_vagrant_berkshelf_plugin: true
provisioner:
  name: chef_zero
  chef_omnibus_url: http://box-url/install.sh
platforms:
  - name: prod.abc.com
    driver:
      box_url: http://abc.box
    run_list:
      - role[new_role]
suites:
  - name: default
With the above kitchen.yml, the hostname of the machine comes out as default-prodabccom. However, I want the hostname to be prod.abc.com.
What changes should I make in my kitchen.yml to get the correct name?
Hostname of the Guest System
In order to define the hostname of the operating system running inside the VM (cf. /etc/hostname), use the vm_hostname option of the kitchen-vagrant driver:
platforms:
  - name: prod.abc.com
    driver_config:
      vm_hostname: prod.abc.com
Name of the Test-Kitchen Suite/Platform
To rename the suite-platform combination shown in Converging <default-prodabccom>, you can only play with the names of the suite and the platform, e.g., to get production-abccom. This name is computed inside test-kitchen, where, for example, all dots are stripped, and that cannot simply be changed.
Nevertheless, if I understand correctly that you want to change this name: it makes little sense to me. Don't change that.
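For completeness, a hedged sketch of that only supported lever, renaming the suite and platform (the names below are my own placeholders):
platforms:
  - name: abccom
suites:
  - name: production
# converges as <production-abccom>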
Name of the VM in VirtualBox
The name of the VM (e.g. kitchen-default-prodabcom_..default_1234..) is derived inside kitchen-vagrant and cannot easily be changed.
I found this question because I was in a scenario where I am testing a number of kitchen-enabled repos, each containing a number of platforms, e.g.:
elasticsearch
    centos-6
    centos-7
java
    centos-6
    centos-7
You can give those machines their own IP from VirtualBox when they are spun up, like so:
driver:
  name: vagrant
  network:
    - ["private_network", { type: "dhcp" }]
This facilitates testing: if something has failed, you can get to the box directly. And you can use the Vagrant HostManager plugin to keep your /etc/hosts updated with the current IP address.
So you can go to http://default-centos-74.vagrantup.com in a local browser to check that instance. You can also name your suites in such a way that it leads to unique names for each one, across repos, for example by prefixing each like so:
suites:
  - name: elasticsearch-default
and in the other .kitchen.yml:
suites:
  - name: java-default
which still leads to useful naming:
http://elasticsearch-default-centos-74.vagrantup.com
However, what's happened recently is that Chrome and Firefox have started to enforce HSTS, which makes trying to get to non-HTTPS local sites mapped via /etc/hosts a PITA.
The main thing is to get rid of the vagrantup.com suffix. However, that is hard-coded, and the only option for overriding it is in .kitchen.yml, which is unfortunate, because that doesn't know the suite and platform at the point it generates the Vagrantfile, so it's not much use.
You can use Chef/Ansible to rename the box, but that is not very nice. The solution I came up with is this:
you can set a custom Vagrantfile.erb in .kitchen.yml:
---
driver:
  name: vagrant
  network:
    - ["private_network", { type: "dhcp" }]
  vagrantfile_erb: Vagrantfile.erb
Then copy that Vagrantfile.erb out of the gem on your local box into the root of your test-kitchen repo. Mine was at /home/user1/.gem/ruby/gems/kitchen-vagrant-1.3.0/templates/Vagrantfile.erb.
And then you can set arbitrary names for your boxes by changing it at line 36:
c.vm.hostname = "<%= @instance.name %>.<%= config[:kitchen_root].split('/')[-1] %>.testbox"
Or you can modify it like so, allowing it to be overridden from the .kitchen.yml config:
36c36
< c.vm.hostname = "<%= config[:vm_hostname] %>"
---
> c.vm.hostname = "<%= @instance.name %>.<%= config[:var_domain] ? config[:var_domain] : config[:kitchen_root].split('/')[-1] %>.<%= config[:var_suffix] ? config[:var_suffix] : "vagrantup.com" %>"
99d98
<
https://gist.github.com/tolland/fe01eb0f46d26850cc5c98e167578f7b
You can then set arbitrary names for your boxes by setting var_suffix and var_domain in .kitchen.yml:
---
driver:
  name: vagrant
  network:
    - ["private_network", { type: "dhcp" }]
  vagrantfile_erb: Vagrantfile.erb
  #var_domain: sometingsomething
  var_suffix: testbox
