While testing the network, I'm seeing very low throughput with gigabit Ethernet, which is implemented in RGMII mode, as follows:
$ iperf -s
Interval Transfer Bandwidth
0.0-10.0 sec 211 MBytes 176 Mbits/sec
$ iperf -c <IP>
Interval Transfer Bandwidth
0.0-10.0 sec 101 MBytes 83.6 Mbits/sec
I'm just a beginner with networking on Linux, so could anyone point me in a direction to debug and fix this issue?
The only thing I have tried so far is reading the ethtool output, and I don't observe anything wrong there.
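The settings below were read with ethtool, i.e.:
$ ethtool eth1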
Settings for eth1:
Supported ports: [ TP MII ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Supported pause frame use: Symmetric
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: MII
PHYAD: 2
Transceiver: external
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000000 (0)
Link detected: yes
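Would checking the driver counters and the CPU load during the test be a sensible next step? For example (commands I have not run yet; mpstat comes from the sysstat package, which may not be on the board):
$ ethtool -S eth1                  # NIC/driver statistics: errors, drops, pause frames
$ cat /proc/interrupts | grep eth  # which CPU services the NIC interrupts
$ mpstat -P ALL 1                  # is one core saturated (e.g. in softirq) while iperf runs?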
I'm getting the following error in the serial output (verbose mode) when my smart device, an ESP32-S3, tries to connect over Wi-Fi to the access point (AP). The SSID is TheScientist.
Below is the serial output:
Connecting to TheScientist
[ 1064][D][WiFiGeneric.cpp:929] _eventCallback(): Arduino Event: 0 - WIFI_READY
[ 1098][V][WiFiGeneric.cpp:338] _arduino_event_cb(): STA Started
[ 1099][V][WiFiGeneric.cpp:97] set_esp_interface_ip(): Configuring Station static IP: 0.0.0.0, MASK: 0.0.0.0, GW: 0.0.0.0
[ 1099][D][WiFiGeneric.cpp:929] _eventCallback(): Arduino Event: 2 - STA_START
.......[ 4750][V][WiFiGeneric.cpp:360] _arduino_event_cb(): STA Disconnected: SSID: TheScientist, BSSID: 3c:cd:5d:a7:f1:13, Reason: 15
[ 4750][D][WiFiGeneric.cpp:929] _eventCallback(): Arduino Event: 5 - STA_DISCONNECTED
[ 4758][W][WiFiGeneric.cpp:950] _eventCallback(): Reason: 15 - 4WAY_HANDSHAKE_TIMEOUT
Although the verbose debug serial output shows
Reason: 15 - 4WAY_HANDSHAKE_TIMEOUT
in reality the timeout is caused by using a password longer than the 32 characters allowed on ESP32 MCUs. In summary, for any ESP32 MCU to successfully connect to any Wi-Fi access point, the network password must not exceed 32 characters.
I have installed Grafana 8.4.4 Enterprise on Windows 10 Pro and I am trying to connect to a Timescale database (PostgreSQL).
First I tried to connect to Timescale Cloud (https://portal.timescale.cloud/login), which uses Timescale version 2.6.0.
On the Timescale Cloud service page I can see these credentials:
Host: <host-string>
Port: 10250
User: tsdbadmin
Password: <timescale-cloud-service-password>
Service URI: postgres://tsdbadmin:<timescale-cloud-service-password>@<host-string>:10250/defaultdb?sslmode=require
SSL mode: require
Allowed IP addresses: 0.0.0.0/0
PostgreSQL version: 14.2
The database name is: periodic-measurements
I have also copied the CA Certificate.
In the Timescale Cloud Data Source in Grafana I have the following configurations:
Host: <host-string>:10250
Database: periodic-measurements
User: tsdbadmin
Password: <timescale-cloud-service-password>
TLS/SSL Mode: require
TLS/SSL Method: Certificate content
CA Cert: <CA Certificate>
Version: 12+
TimescaleDB: enabled
When clicking Save & Test I get:
Query data error
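For what it's worth, the same Service URI can also be tested outside Grafana, assuming psql is installed on the Windows host (this is simply the URI from above passed to psql):
psql "postgres://tsdbadmin:<timescale-cloud-service-password>@<host-string>:10250/defaultdb?sslmode=require"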
Then, I tried to connect to the Timescale database (PostgreSQL version 14.2, Timescale version 2.6.0) contained in my local Linux instance (Ubuntu 18.04 on VirtualBox on top of my Windows 10).
From my local Linux terminal I get into the database as follows:
$ psql -h localhost -p 5432 -U db_owner testdb
by using the password <timescale-local-password>.
In my local Linux instance in /etc/postgresql/14/main/pg_hba.conf I have the following line:
# IPv4 local connections:
# TYPE DATABASE USER ADDRESS METHOD
host all all 0.0.0.0/0 scram-sha-256
In VirtualBox, under Settings > Network, I have selected Bridged Adapter. In Linux, ifconfig gives me the following:
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.30.129 netmask 255.255.255.0 broadcast 192.168.30.255
inet6 fe80::c757:3763:9a7d:e2f1 prefixlen 64 scopeid 0x20<link>
ether 08:00:27:d0:6c:9c txqueuelen 1000 (Ethernet)
RX packets 76 bytes 23686 (23.6 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 119 bytes 13263 (13.2 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
In the Timescale Local Data Source in Grafana I have the following configurations:
Host: 192.168.30.129:5432
Database: testdb
User: db_owner
Password: <timescale-local-password>
TLS/SSL Mode: disable
Version: 12+
TimescaleDB: enabled
When clicking Save & Test I get:
Database Connection OK
Note also that, by using my Grafana installation in my Linux local instance, I can successfully connect to Timescale Cloud.
Any hint as to what could be wrong with the connection to Timescale Cloud from the Windows Grafana?
Thanks,
Bernardo
I have a Docker container running on an overlay network. My requirement is to reach the service running in this container externally, from different hosts. The service is bound to the container's internal IP address, and binding the port to the host is not a solution in this case.
Actual Scenario:
The service running inside the container is a Spark driver configured in yarn-client mode. The Spark driver binds to the container's internal IP (10.x.x.x). When the Spark driver communicates with Hadoop YARN running on a different cluster, the application master on YARN tries to communicate back to the Spark driver on the driver's container-internal IP, but it can't reach the driver on that internal IP for obvious reasons.
Please let me know if there is a way to achieve successful communication from the application master (YARN) to the Spark driver (Docker container).
Swarm Version: 1.2.5
docker info:
Containers: 3
Running: 2
Paused: 0
Stopped: 1
Images: 42
Server Version: swarm/1.2.5
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 1
ip-172-30-0-175: 172.30.0.175:2375
└ ID: YQ4O:WGSA:TGQL:3U5F:ONL6:YTJ2:TCZJ:UJBN:T5XA:LSGL:BNGA:UGZW
└ Status: Healthy
└ Containers: 3 (2 Running, 0 Paused, 1 Stopped)
└ Reserved CPUs: 0 / 16
└ Reserved Memory: 0 B / 66.06 GiB
└ Labels: kernelversion=3.13.0-91-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
└ UpdatedAt: 2016-09-10T05:01:32Z
└ ServerVersion: 1.12.1
Plugins:
Volume:
Network:
Swarm:
NodeID:
Is Manager: false
Node Address:
Security Options:
Kernel Version: 3.13.0-91-generic
Operating System: linux
Architecture: amd64
CPUs: 16
Total Memory: 66.06 GiB
Name: 945b4af662a4
Docker Root Dir:
Debug Mode (client): false
Debug Mode (server): false
Command to run the container (I am running it using docker-compose):
zeppelin:
  container_name: "${DATARPM_ZEPPELIN_CONTAINER_NAME}"
  image: "${DOCKER_REGISTRY}/zeppelin:${DATARPM_ZEPPELIN_TAG}"
  network_mode: "${CONTAINER_NETWORK}"
  mem_limit: "${DATARPM_ZEPPELIN_MEM_LIMIT}"
  env_file: datarpm-etc.env
  links:
    - "xyz"
    - "abc"
  environment:
    - "VOL1=${VOL1}"
    - "constraint:node==${DATARPM_ZEPPELIN_HOST}"
  volumes:
    - "${VOL1}:${VOL1}:rw"
  entrypoint: ["/bin/bash", "-c", '<some command here>']
It seems YARN and Spark need to be able to see each other directly on the network. If you could put them on the same overlay network, everything would be able to communicate directly; if not...
Overlay
It is possible to route data directly into the overlay network on a Docker node via the docker_gwbridge that all overlay containers are connected to. But, and it's a big but, that only works if you are on the Docker node where the container is running.
So, running two containers on a two-node, non swarm mode overlay network (10.0.9.0/24)...
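For context, such a test network can be created along these lines (the names are illustrative; a non swarm mode overlay also needs an external key-value store such as Consul, which is not shown here):
mhs-demo0:$ docker network create -d overlay --subnet 10.0.9.0/24 demonet
mhs-demo0:$ docker run -d --name c0 --net demonet busybox sleep 3600
mhs-demo1:$ docker run -d --name c1 --net demonet busybox sleep 3600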
From mhs-demo0 I can ping the local container but not the remote one on mhs-demo1:
docker#mhs-demo0:~$ sudo ip ro add 10.0.9.0/24 dev docker_gwbridge
docker#mhs-demo0:~$ ping -c 1 10.0.9.2
PING 10.0.9.2 (10.0.9.2): 56 data bytes
64 bytes from 10.0.9.2: seq=0 ttl=64 time=0.086 ms
docker#mhs-demo0:~$ ping -c 1 10.0.9.3
PING 10.0.9.3 (10.0.9.3): 56 data bytes
^C
--- 10.0.9.3 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
Then on the other host the results are reversed, but it's still the local container that is accessible.
docker#mhs-demo1:~$ sudo ip ro add 10.0.9.0/24 dev docker_gwbridge
docker#mhs-demo1:~$ ping 10.0.9.2
PING 10.0.9.2 (10.0.9.2): 56 data bytes
^C
--- 10.0.9.2 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
docker#mhs-demo1:~$ ping 10.0.9.3
PING 10.0.9.3 (10.0.9.3): 56 data bytes
64 bytes from 10.0.9.3: seq=0 ttl=64 time=0.094 ms
64 bytes from 10.0.9.3: seq=1 ttl=64 time=0.068 ms
So the big issue is that the network would need to know where containers are running and route packets accordingly. If the network were capable of routing like that, you probably wouldn't need an overlay network in the first place.
Bridge networks
Another possibility is using a plain bridge network on each Docker node with routable IPs, so each bridge has an IP range assigned that your network is aware of and can route to from anywhere.
   192.168.9.0/24        10.10.2.0/24
        Yarn                DockerC
              \            /
                  router
              /            \
   10.10.0.0/24          10.10.1.0/24
      DockerA               DockerB
Then you would attach a network to each node:
DockerA:$ docker network create --subnet 10.10.0.0/24 sparknet
DockerB:$ docker network create --subnet 10.10.1.0/24 sparknet
DockerC:$ docker network create --subnet 192.168.2.0/24 sparknet
Then the router configures routes for 10.10.0.0/24 via DockerA etc.
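On the router that would look something like this (the next-hop addresses are placeholders for the Docker hosts' routable addresses, not values from the question):
router:$ ip route add 10.10.0.0/24 via <DockerA-host-IP>
router:$ ip route add 10.10.1.0/24 via <DockerB-host-IP>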
This is a similar approach to the way Kubernetes does its networking.
Weave Net
Weave is similar to overlay in that it creates a virtual network that transmits data over UDP. It's a bit more of a generalised networking solution though and can integrate with a host network.
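A minimal sketch of that setup (host names and the image are placeholders):
host1:$ weave launch
host2:$ weave launch <host1-ip>
host2:$ eval $(weave env)    # point the Docker client at the Weave proxy
host2:$ docker run -d --name spark-driver <image>
Containers started through the Weave proxy get an address on the Weave network, and weave expose can attach the host itself to that network as well.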
I've downloaded the latest (as of October 29th, 2015) tarball, run ./cluster/kube-up.sh and followed the guestbook-go example.
I'm hoping to access the underlying frontend from my OS X host. As a newbie to this, I'm having trouble discerning the layers of networking involved between my host, VirtualBox, Kubernetes and Docker.
The example spawns 3 Docker frontends, so I'm anticipating there is some load-balanced way of accessing them.
➜ kubernetes kubectl get services 0
NAME           LABELS                                    SELECTOR            IP(S)            PORT(S)
guestbook      app=guestbook                             app=guestbook       10.247.25.102    3000/TCP
kubernetes     component=apiserver,provider=kubernetes   <none>              10.247.0.1       443/TCP
redis-master   name=redis-master                         name=redis-master   10.247.212.56    6379/TCP
redis-slave    name=redis-slave                          name=redis-slave    10.247.224.236   6379/TCP
I'm expecting to be able to visit 10.247.25.102:3000 to see the running application. No luck.
https://10.245.1.2 does yield some kind of HTTP response. This corresponds to the eth1 interface of the master VM.
➜ kubernetes kubectl cluster-info
Kubernetes master is running at https://10.245.1.2
KubeDNS is running at https://10.245.1.2/api/v1/proxy/namespaces/kube-system/services/kube-dns
KubeUI is running at https://10.245.1.2/api/v1/proxy/namespaces/kube-system/services/kube-ui
vagrant ssh minion-1
ifconfig
...
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.245.1.3 netmask 255.255.255.0 broadcast 10.245.1.255
inet6 fe80::a00:27ff:fe9a:e16 prefixlen 64 scopeid 0x20<link>
ether 08:00:27:9a:0e:16 txqueuelen 1000 (Ethernet)
RX packets 8858 bytes 5128595 (4.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 10119 bytes 2535751 (2.4 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
...
Using VirtualBox 5.0.8, Vagrant 1.7.4.
Have you looked at this debugging section?
In general, pods are not expected to be accessed directly. A service is expected to be created that will proxy requests to pod(s).
Examples include:
KubeUI is running at https://10.245.1.2/api/v1/proxy/namespaces/kube-system/services/kube-ui
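Following that pattern, the guestbook service (in the default namespace) should presumably be reachable through the apiserver proxy at something like:
https://10.245.1.2/api/v1/proxy/namespaces/default/services/guestbook:3000/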
Updating to Kubernetes 1.0.7 solved the issue. They switched to Flannel for managing the overlay network.
I have a problem with networking in Buildroot on my virtual machine. When I type ifconfig I get this answer:
eth0: error fetching interface information: Device not found
While Buildroot is booting, on the console I see:
ip: can't find device eth0
ip: SIOCGIFFLAGS: No such device
I can't find a way to resolve this problem.
# make linux-menuconfig
Device Drivers --->
Network device support --->
Ethernet driver support --->
Select:
<*> Intel(R) PRO/100+ support
<*> Intel(R) PRO/1000 Gigabit Ethernet support
<*> Intel(R) PRO/1000 PCI-Express Gigabit Ethernet support
<*> Intel(R) 82575/82576 PCI-Express Gigabit Ethernet support
[*] Intel(R) PCI-Express Gigabit adapters HWMON support
<*> Intel(R) 82576 Virtual Function Ethernet support
It should now work with the VM.
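If you prefer not to go through menuconfig every time, the same selections can be kept in a kernel config fragment (referenced from Buildroot's BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES option); the corresponding symbols would be roughly:
# Intel PRO/100+ and PRO/1000 family
CONFIG_E100=y
CONFIG_E1000=y
CONFIG_E1000E=y
# Intel 82575/82576 and 82576 VF (with HWMON support)
CONFIG_IGB=y
CONFIG_IGB_HWMON=y
CONFIG_IGBVF=y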
Your problem is not a Buildroot problem, but a kernel configuration problem.
From the last line of the boot log below, you might add Intel(R) PRO/1000 Gigabit Ethernet support (the e1000 driver), as @TadejP mentioned.
[ 0.204512] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
[ 0.205444] e1000: Copyright (c) 1999-2006 Intel Corporation.
[ 0.220165] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[ 0.362077] ata1.00: ATA-7: QEMU HARDDISK, 2.5+, max UDMA/100
[ 0.362908] ata1.00: 4280320 sectors, multi 16: LBA48
[ 0.364110] ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
[ 0.365246] ata2.00: configured for MWDMA2
[ 0.366176] ata1.00: configured for MWDMA2
[ 0.366846] scsi 0:0:0:0: Direct-Access ATA QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
[ 0.368118] sd 0:0:0:0: [sda] 4280320 512-byte logical blocks: (2.19 GB/2.04 GiB)
[ 0.369219] sd 0:0:0:0: [sda] Write Protect is off
[ 0.369916] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 0.371218] sd 0:0:0:0: Attached scsi generic sg0 type 0
[ 0.372213] scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
[ 0.387384] sda: sda1 sda2 sda3
[ 0.388409] sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
[ 0.389298] cdrom: Uniform CD-ROM driver Revision: 3.20
[ 0.390163] sd 0:0:0:0: [sda] Attached SCSI disk
[ 0.390976] sr 1:0:0:0: Attached scsi generic sg1 type 5
[ 0.548201] e1000 0000:00:03.0 eth0: (PCI:33MHz:32-bit) 52:54:00:12:34:56
[ 0.549265] e1000 0000:00:03.0 eth0: Intel(R) PRO/1000 Network Connection
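For reference, the emulated NIC model has to match one of the drivers enabled above. The boot log above is from QEMU, where that looks something like this (paths and the root device are illustrative, not taken from this post):
qemu-system-i386 -kernel output/images/bzImage -hda output/images/rootfs.ext2 -append "root=/dev/sda1" -net nic,model=e1000 -net user
In VirtualBox the equivalent setting is the Adapter Type in the VM's network configuration; the Intel PRO/1000 choices there are handled by the e1000/e1000e drivers.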