I'm trying to mount /sys and /sys/kernel/security read-only at a custom location with slave mount propagation. On kernels up to 4.15 the read-only option propagates back to the original /sys/kernel/security mount. On newer kernels it does not propagate, which at first looks like a kernel bug that was fixed, but it does not propagate even when the custom mountpoint (here /mnt/host_root/sys) is left shared. How can this be explained?
Steps to reproduce:
# mkdir -p /mnt/host_root/sys
# mount -o ro -t sysfs sys /mnt/host_root/sys
# mount --make-rslave /mnt/host_root/sys
# mount -o ro -t securityfs securityfs /mnt/host_root/sys/kernel/security
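To see where the ro flag actually lands on each kernel, findmnt can show the per-mount VFS options separately from the per-superblock FS options (column names as in recent util-linux), for example:
# findmnt -o TARGET,PROPAGATION,VFS-OPTIONS,FS-OPTIONS /sys/kernel/security
# findmnt -o TARGET,PROPAGATION,VFS-OPTIONS,FS-OPTIONS /mnt/host_root/sys/kernel/security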
Result on CentOS 7 (kernel 3.10) or Ubuntu 18.04 LTS (kernel 4.15):
$ findmnt -o TARGET,PROPAGATION,OPTIONS
TARGET PROPAGATION OPTIONS
/ shared rw,relatime,errors=remount-ro,data=ordered
├─/sys shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/security shared ro,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup shared ro,nosuid,nodev,noexec,mode=755
│ │ ├─/sys/fs/cgroup/unified shared rw,nosuid,nodev,noexec,relatime,nsdelegate
│ │ ├─/sys/fs/cgroup/systemd shared rw,nosuid,nodev,noexec,relatime,xattr,name=systemd
│ │ ├─/sys/fs/cgroup/devices shared rw,nosuid,nodev,noexec,relatime,devices
│ │ ├─/sys/fs/cgroup/memory shared rw,nosuid,nodev,noexec,relatime,memory
│ │ ├─/sys/fs/cgroup/rdma shared rw,nosuid,nodev,noexec,relatime,rdma
│ │ ├─/sys/fs/cgroup/net_cls,net_prio shared rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
│ │ ├─/sys/fs/cgroup/hugetlb shared rw,nosuid,nodev,noexec,relatime,hugetlb
│ │ ├─/sys/fs/cgroup/cpu,cpuacct shared rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
│ │ ├─/sys/fs/cgroup/blkio shared rw,nosuid,nodev,noexec,relatime,blkio
│ │ ├─/sys/fs/cgroup/freezer shared rw,nosuid,nodev,noexec,relatime,freezer
│ │ ├─/sys/fs/cgroup/cpuset shared rw,nosuid,nodev,noexec,relatime,cpuset
│ │ ├─/sys/fs/cgroup/perf_event shared rw,nosuid,nodev,noexec,relatime,perf_event
│ │ └─/sys/fs/cgroup/pids shared rw,nosuid,nodev,noexec,relatime,pids
│ ├─/sys/fs/pstore shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/debug shared rw,relatime
│ ├─/sys/fs/fuse/connections shared rw,relatime
│ └─/sys/kernel/config shared rw,relatime
├─/proc shared rw,nosuid,nodev,noexec,relatime
│ └─/proc/sys/fs/binfmt_misc shared rw,relatime,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=12645
├─/dev shared rw,nosuid,relatime,size=1996564k,nr_inodes=499141,mode=755
│ ├─/dev/pts shared rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
│ ├─/dev/shm shared rw,nosuid,nodev
│ ├─/dev/hugepages shared rw,relatime,pagesize=2M
│ └─/dev/mqueue shared rw,relatime
├─/run shared rw,nosuid,noexec,relatime,size=403944k,mode=755
│ ├─/run/lock shared rw,nosuid,nodev,noexec,relatime,size=5120k
│ ├─/run/user/1000 shared rw,nosuid,nodev,relatime,size=403940k,mode=700,uid=1000,gid=1000
│ └─/run/docker/netns/default shared rw
├─/boot shared rw,relatime,block_validity,barrier,user_xattr,acl
├─/tmp shared rw,relatime,size=1048576k
└─/mnt/host_root/sys private ro,relatime
  └─/mnt/host_root/sys/kernel/security private ro,relatime
Result on Ubuntu 22.04 LTS (kernel 5.15); here mount --make-rslave /mnt/host_root/sys is skipped, so the new mounts stay shared:
$ findmnt -o TARGET,PROPAGATION,OPTIONS
TARGET PROPAGATION OPTIONS
/ shared rw,relatime
├─/sys shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/security shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup shared rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot
│ ├─/sys/fs/pstore shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/bpf shared rw,nosuid,nodev,noexec,relatime,mode=700
│ ├─/sys/kernel/debug shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/tracing shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/fuse/connections shared rw,nosuid,nodev,noexec,relatime
│ └─/sys/kernel/config shared rw,nosuid,nodev,noexec,relatime
├─/proc shared rw,nosuid,nodev,noexec,relatime
│ └─/proc/sys/fs/binfmt_misc shared rw,relatime,fd=29,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=24581
│   └─/proc/sys/fs/binfmt_misc shared rw,nosuid,nodev,noexec,relatime
├─/dev shared rw,nosuid,relatime,size=1951612k,nr_inodes=487903,mode=755,inode64
│ ├─/dev/pts shared rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
│ ├─/dev/shm shared rw,nosuid,nodev,inode64
│ ├─/dev/hugepages shared rw,relatime,pagesize=2M
│ └─/dev/mqueue shared rw,nosuid,nodev,noexec,relatime
├─/run shared rw,nosuid,nodev,noexec,relatime,size=401680k,mode=755,inode64
│ ├─/run/lock shared rw,nosuid,nodev,noexec,relatime,size=5120k,inode64
│ ├─/run/user/1000 shared rw,nosuid,nodev,relatime,size=401676k,nr_inodes=100419,mode=700,uid=1000,gid=1000,inode64
│ ├─/run/credentials/systemd-sysusers.service shared ro,nosuid,nodev,noexec,relatime,mode=700
│ └─/run/docker/netns/default shared rw
├─/boot shared rw,relatime
└─/mnt/host_root/sys shared ro,relatime
  └─/mnt/host_root/sys/kernel/security shared ro,relatime
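For contrast (a sketch of an alternative, not what the steps above do): a read-only bind mount sets only the per-mount read-only flag and leaves the shared securityfs superblock untouched, so it should not leak back to /sys/kernel/security on any kernel:
# mount --bind /sys/kernel/security /mnt/host_root/sys/kernel/security
# mount -o remount,bind,ro /mnt/host_root/sys/kernel/security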
We have a modest ClickHouse cluster, ~30 nodes, and want to collect usage stats on it. We are hoping to do this using scheduled queries against the system tables, but a normal query only returns information from the one node you happen to be connected to, and creating a distributed table only works with the *log system tables. We could loop over the nodes, but we'd rather not. Is there a way to get all the instances of a system table, such as system.parts, in one query?
Distributed tables work with any type of table, and clusterAllReplicas does as well.
create table test on cluster replicated as system.processes Engine=Distributed(replicated, system, processes);
SELECT
FQDN(),
elapsed
FROM test
┌─FQDN()────────────────────┬────elapsed─┐
│ hos.mycmdb.net │ 0.00063795 │
└───────────────────────────┴────────────┘
SELECT
FQDN(),
elapsed
FROM clusterAllReplicas(replicated, system, processes);
SELECT elapsed
FROM clusterAllReplicas(replicated, system, processes)
┌─────elapsed─┐
│ 0.005636027 │
└─────────────┘
┌─────elapsed─┐
│ 0.000228303 │
└─────────────┘
┌─────elapsed─┐
│ 0.000275745 │
└─────────────┘
┌─────elapsed─┐
│ 0.000311621 │
└─────────────┘
┌─────elapsed─┐
│ 0.000270791 │
└─────────────┘
┌─────elapsed─┐
│ 0.000288045 │
└─────────────┘
┌─────elapsed─┐
│ 0.001048277 │
└─────────────┘
┌─────elapsed─┐
│ 0.000256203 │
└─────────────┘
The remote or remoteSecure table functions, which support multiple addresses, can also be used:
SELECT
hostName() AS host,
any(partition),
count()
FROM remote('node{01..30}-west.contoso.com', system, parts)
GROUP BY host
/*
┌─host──────────┬─any(partition)─┬─count()─┐
│ node01-west │ 202012 │ 733 │
..
│ node30-west │ 202012 │ 687 │
└───────────────┴────────────────┴─────────┘
*/
For the record, we ended up using materialized views:
CREATE MATERIALIZED VIEW _tmp.parts on cluster main_cluster
engine = Distributed('main_cluster', 'system', 'parts', rand())
AS select * from system.parts
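For the scheduled usage-stats queries mentioned above, a minimal sketch along the same lines (assuming the cluster is named main_cluster as in the materialized view, and using standard system.parts columns) could aggregate storage per node like this:
-- bytes and row counts per table, per node, active parts only
SELECT
    hostName() AS host,
    database,
    table,
    sum(bytes_on_disk) AS total_bytes,
    sum(rows) AS total_rows
FROM clusterAllReplicas('main_cluster', system, parts)
WHERE active
GROUP BY host, database, table
ORDER BY total_bytes DESC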
I was trying to set up ODL with Mininet using the latest version of ODL, which I downloaded from their site. It seems that they have removed support for DLUX, but I am using NextUI instead. However, I am not able to install the feature odl-l2switch-switch or odl-l2switch-switch-ui; it seems to no longer be available. Is there any other way to enable the reactive flow discovery feature in ODL? I have been trying to find a solution for a while now.
opendaylight-user@root>feature:install odl-l2switch-switch
Error executing command: No matching features for odl-l2switch-switch/0.0.0
opendaylight-user@root>feature:list | grep l2switch
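The full feature listing can also be filtered for the OpenFlow plugin to see what the distribution still ships; if l2switch has been dropped, the grep above simply returns nothing:
opendaylight-user@root>feature:list | grep -i openflow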
I was not able to get a reactive flow entry after connecting to the controller.
mininet@mininet-vm:~$ sudo mn --topo=tree,2,2 --controller=remote,ip=10.5.1.3,port=6633 --switch=ovsk,protocol=OpenFlow13
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2 h3 h4
*** Adding switches:
s1 s2 s3
*** Adding links:
(s1, s2) (s1, s3) (s2, h1) (s2, h2) (s3, h3) (s3, h4)
*** Configuring hosts
h1 h2 h3 h4
*** Starting controller
c0
*** Starting 3 switches
s1 s2 s3 ...
*** Starting CLI:
mininet> links
s1-eth1<->s2-eth3 (OK OK)
s1-eth2<->s3-eth3 (OK OK)
s2-eth1<->h1-eth0 (OK OK)
s2-eth2<->h2-eth0 (OK OK)
s3-eth1<->h3-eth0 (OK OK)
s3-eth2<->h4-eth0 (OK OK)
mininet> pingall
*** Ping: testing ping reachability
h1 -> X X X
h2 -> X X X
h3 -> X X X
h4 -> X X X
*** Results: 100% dropped (0/12 received)
mininet> dump
<Host h1: h1-eth0:10.0.0.1 pid=29639>
<Host h2: h2-eth0:10.0.0.2 pid=29641>
<Host h3: h3-eth0:10.0.0.3 pid=29643>
<Host h4: h4-eth0:10.0.0.4 pid=29645>
<OVSSwitch{'protocol': 'OpenFlow13'} s1: lo:127.0.0.1,s1-eth1:None,s1-eth2:None pid=29650>
<OVSSwitch{'protocol': 'OpenFlow13'} s2: lo:127.0.0.1,s2-eth1:None,s2-eth2:None,s2-eth3:None pid=29653>
<OVSSwitch{'protocol': 'OpenFlow13'} s3: lo:127.0.0.1,s3-eth1:None,s3-eth2:None,s3-eth3:None pid=29656>
<RemoteController{'ip': '10.5.1.3', 'port': 6633} c0: 10.5.1.3:6633 pid=29633>
The flow table in the OVS switch seems to be empty.
mininet@mininet-vm:~$ sudo ovs-ofctl dump-ports s1
OFPST_PORT reply (xid=0x2): 3 ports
port 1: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
port 2: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
mininet@mininet-vm:~$ sudo ovs-ofctl dump-flows s1
NXST_FLOW reply (xid=0x4):
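For reference, plain ovs-ofctl speaks OpenFlow 1.0 (hence the NXST_FLOW reply above); since the switches were started with OpenFlow 1.3, the same check with the matching version flag would be:
mininet@mininet-vm:~$ sudo ovs-ofctl -O OpenFlow13 dump-flows s1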
I have installed the features that look like they are responsible for this, without any success.
odl-openflowplugin-flow-services │ 0.7.0 │ │ Started │ odl-openflowplugin-flow-services │ OpenDaylight :: Openflow Plugin :: Flow Services
features-openflowplugin │ 0.7.0 │ x │ Started │ features-openflowplugin │ features-openflowplugin
odl-openflowplugin-app-lldp-speaker │ 0.7.0 │ x │ Started │ odl-openflowplugin-app-lldp-speaker │ OpenDaylight :: Openflow Plugin :: Application -
odl-openflowplugin-app-topology │ 0.7.0 │ │ Started │ odl-openflowplugin-app-topology │ OpenDaylight :: Openflow Plugin :: Application -
odl-openflowplugin-app-topology-lldp-discovery
Please point me in the right direction. All the online questions and answers seem to be aimed at earlier versions of the ODL controller. This being a highly used controller, there should be a way to achieve this.