Replacing the COALESCE function in ClickHouse

I'm a beginner in ClickHouse and am trying to use it to handle statistics for our project. Our desktop software sends information to our servers, and we need to group the operating systems into a short list. Here is an example query:
SELECT OS
FROM Req
GROUP BY OS
┌─OS──────────────────────────────────────────────────────────────────────────────┐
│ Майкрософт Windows 10 Корпоративная 2016 с долгосрочным обслуживанием │
│ Майкрософт Ознакомительная версия Windows Server 2012 Standard │
│ Майкрософт Windows 10 Домашняя для одного языка │
│ Microsoft Windows 8.1 Enterprise │
│ Майкрософт Windows 8 Корпоративная Прогрессивная │
│ Microsoft Windows Server 2008 R2 Standard │
│ Microsoft Windows 8.1 mit Bing │
│ Microsoft Windows 10 Home │
│ Microsoft Windows 8 Enterprise N │
│ Майкрософт Windows 8.1 Профессиональная │
│ Майкрософт Windows 8 Профессиональная │
│ Microsoft Windows 7 Rеактивная │
│ Microsoft Windows 10 Pro Insider Preview │
This needs to be aggregated into a clean list:
8 xxx
8.1 yyy
2008 zzz
2008 R2 aaa
and so on. I couldn't find a COALESCE function, so I tried using extract to identify the OS by its version number:
select
    extract(OS, ' 7 ') || extract(OS, ' 8.1 ') || extract(OS, ' 10 ') ||
    extract(OS, ' 2008 R2 ') || extract(OS, ' 2008 ') ||
    extract(OS, ' 2012 R2 ') || extract(OS, ' 2012 ') as Value,
    count(distinct SID)
from Req
group by Value
limit 100000;
But because Windows 2008 and Windows 2008 R2 both contain '2008' in the version string, I get this result:
┌─Value───────────┬─uniqExact(SID)─┐
│ │ 224 │
│ 2012 │ 17 │
│ 10 │ 1315 │
│ 7 │ 4282 │
│ 2008 │ 20 │
│ 2012 R2 2012 │ 57 │
│ 2008 R2 2008 │ 136 │
│ 8.1 │ 754 │
└─────────────────┴────────────────┘
What function do I need to use in my case? Thanks.

What you need here is multiIf.
If the string "2012 R2" is found, return that; if "2012" is found, return that, and so on.
So in your case you could do something like this:
multiIf(like(OS, '% 2008 R2 %'), extract(OS, ' 2008 R2 '), like(OS, '% 2008 %'), extract(OS, ' 2008 '), 'OS_not_found') as Value
This is basically an if/else-if chain, and you can add as many branches as you like. I only used two because I didn't want to write too much; in your case just add all the OS values you need. It's a bit verbose, but it gets the job done.
The function:
like(OS, '% 2008 R2 %')
returns true if the pattern is found and false otherwise; the "%" is the LIKE wildcard in ClickHouse, matching any sequence of characters. Since multiIf stops at the first match, you will not get two extracted strings in the same value.
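Putting the pieces together with the aggregation from the question, here is a fuller sketch. It assumes the same Req table and SID column as above, returns fixed labels rather than extracted substrings, and lists each longer pattern (' 2008 R2 ') before its prefix (' 2008 ') so the more specific match wins:
SELECT
    multiIf(
        -- more specific patterns first, so ' 2008 R2 ' is tested before ' 2008 '
        like(OS, '% 2008 R2 %'), '2008 R2',
        like(OS, '% 2008 %'), '2008',
        like(OS, '% 2012 R2 %'), '2012 R2',
        like(OS, '% 2012 %'), '2012',
        like(OS, '% 8.1 %'), '8.1',
        like(OS, '% 8 %'), '8',
        like(OS, '% 10 %'), '10',
        like(OS, '% 7 %'), '7',
        'OS_not_found'
    ) AS Value,
    uniqExact(SID) AS Users
FROM Req
GROUP BY Value;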

Found!
select OS, arrayFirst(x -> cast(position(OS, x) as UInt8), [' 8 ',' 8.1 ', '2008 R2', '2008'])
from Req
limit 1000;
(Without the CAST I get an exception: DB::Exception: Unexpected type of filter column. Strange... apparently the arrayFirst lambda is expected to return UInt8, while position returns UInt64.)
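For reference, a variant sketch of the arrayFirst approach that needs no CAST, because comparing position's UInt64 result with zero already yields UInt8, and that lists the longer tokens first so '2008' cannot shadow '2008 R2':
SELECT
    -- the first token found in OS wins; longer tokens are listed before their prefixes
    arrayFirst(x -> position(OS, x) > 0,
               ['2008 R2', '2012 R2', '2008', '2012', ' 8.1 ', ' 10 ', ' 8 ', ' 7 ']) AS Value,
    uniqExact(SID) AS Users
FROM Req
GROUP BY Value;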

Mount propagation of virtual kernel filesystems and read-only option

I'm trying to mount /sys and /sys/kernel/security as read-only at a custom location with slave mount propagation. With kernels below 4.15, the read-only option propagates to the original mount /sys/kernel/security. With newer kernel versions it doesn't propagate, which suggests a kernel bug was fixed, but it doesn't propagate even if the custom mountpoint (here /mnt/root/sys) is shared. How can this be explained?
Steps to reproduce:
# mkdir -p /mnt/host_root/sys
# mount -o ro -t sysfs sys /mnt/host_root/sys
# mount --make-rslave /mnt/host_root/sys
# mount -o ro -t securityfs securityfs /mnt/host_root/sys/kernel/security
Result on CentOS 7 (kernel 3.10) or Ubuntu 18.04 LTS (kernel 4.15):
$ findmnt -o TARGET,PROPAGATION,OPTIONS
TARGET PROPAGATION OPTIONS
/ shared rw,relatime,errors=remount-ro,data=ordered
├─/sys shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/security shared ro,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup shared ro,nosuid,nodev,noexec,mode=755
│ │ ├─/sys/fs/cgroup/unified shared rw,nosuid,nodev,noexec,relatime,nsdelegate
│ │ ├─/sys/fs/cgroup/systemd shared rw,nosuid,nodev,noexec,relatime,xattr,name=systemd
│ │ ├─/sys/fs/cgroup/devices shared rw,nosuid,nodev,noexec,relatime,devices
│ │ ├─/sys/fs/cgroup/memory shared rw,nosuid,nodev,noexec,relatime,memory
│ │ ├─/sys/fs/cgroup/rdma shared rw,nosuid,nodev,noexec,relatime,rdma
│ │ ├─/sys/fs/cgroup/net_cls,net_prio shared rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
│ │ ├─/sys/fs/cgroup/hugetlb shared rw,nosuid,nodev,noexec,relatime,hugetlb
│ │ ├─/sys/fs/cgroup/cpu,cpuacct shared rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
│ │ ├─/sys/fs/cgroup/blkio shared rw,nosuid,nodev,noexec,relatime,blkio
│ │ ├─/sys/fs/cgroup/freezer shared rw,nosuid,nodev,noexec,relatime,freezer
│ │ ├─/sys/fs/cgroup/cpuset shared rw,nosuid,nodev,noexec,relatime,cpuset
│ │ ├─/sys/fs/cgroup/perf_event shared rw,nosuid,nodev,noexec,relatime,perf_event
│ │ └─/sys/fs/cgroup/pids shared rw,nosuid,nodev,noexec,relatime,pids
│ ├─/sys/fs/pstore shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/debug shared rw,relatime
│ ├─/sys/fs/fuse/connections shared rw,relatime
│ └─/sys/kernel/config shared rw,relatime
├─/proc shared rw,nosuid,nodev,noexec,relatime
│ └─/proc/sys/fs/binfmt_misc shared rw,relatime,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=12645
├─/dev shared rw,nosuid,relatime,size=1996564k,nr_inodes=499141,mode=755
│ ├─/dev/pts shared rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
│ ├─/dev/shm shared rw,nosuid,nodev
│ ├─/dev/hugepages shared rw,relatime,pagesize=2M
│ └─/dev/mqueue shared rw,relatime
├─/run shared rw,nosuid,noexec,relatime,size=403944k,mode=755
│ ├─/run/lock shared rw,nosuid,nodev,noexec,relatime,size=5120k
│ ├─/run/user/1000 shared rw,nosuid,nodev,relatime,size=403940k,mode=700,uid=1000,gid=1000
│ └─/run/docker/netns/default shared rw
├─/boot shared rw,relatime,block_validity,barrier,user_xattr,acl
├─/tmp shared rw,relatime,size=1048576k
└─/mnt/root/sys private ro,relatime
└─/mnt/root/sys/kernel/security private ro,relatime
Result on Ubuntu 22.04 LTS (kernel 5.15), with mount --make-rslave /mnt/host_root/sys skipped:
$ findmnt -o TARGET,PROPAGATION,OPTIONS
TARGET PROPAGATION OPTIONS
/ shared rw,relatime
├─/sys shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/security shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup shared rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot
│ ├─/sys/fs/pstore shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/bpf shared rw,nosuid,nodev,noexec,relatime,mode=700
│ ├─/sys/kernel/debug shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/tracing shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/fuse/connections shared rw,nosuid,nodev,noexec,relatime
│ └─/sys/kernel/config shared rw,nosuid,nodev,noexec,relatime
├─/proc shared rw,nosuid,nodev,noexec,relatime
│ └─/proc/sys/fs/binfmt_misc shared rw,relatime,fd=29,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=24581
│ └─/proc/sys/fs/binfmt_misc shared rw,nosuid,nodev,noexec,relatime
├─/dev shared rw,nosuid,relatime,size=1951612k,nr_inodes=487903,mode=755,inode64
│ ├─/dev/pts shared rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
│ ├─/dev/shm shared rw,nosuid,nodev,inode64
│ ├─/dev/hugepages shared rw,relatime,pagesize=2M
│ └─/dev/mqueue shared rw,nosuid,nodev,noexec,relatime
├─/run shared rw,nosuid,nodev,noexec,relatime,size=401680k,mode=755,inode64
│ ├─/run/lock shared rw,nosuid,nodev,noexec,relatime,size=5120k,inode64
│ ├─/run/user/1000 shared rw,nosuid,nodev,relatime,size=401676k,nr_inodes=100419,mode=700,uid=1000,gid=1000,inode64
│ ├─/run/credentials/systemd-sysusers.service shared ro,nosuid,nodev,noexec,relatime,mode=700
│ └─/run/docker/netns/default shared rw
├─/boot shared rw,relatime
└─/mnt/root/sys shared ro,relatime
└─/mnt/root/sys/kernel/security shared ro,relatime

Is there a better way to query system tables across a ClickHouse cluster?

We have a modest ClickHouse cluster, ~30 nodes, and want to collect usage stats on it. We are hoping to do this with scheduled queries against the system tables, but a normal query only returns information for the one node you happen to be connected to, and creating a distributed table only works with the *log system tables. We could loop over the nodes, but we don't want to do that. Is there a way to get all the instances of a system table, such as system.parts, in one query?
Distributed tables work with any type of table, and clusterAllReplicas works as well.
CREATE TABLE test ON CLUSTER replicated AS system.processes ENGINE = Distributed(replicated, system, processes);
SELECT
FQDN(),
elapsed
FROM test
┌─FQDN()────────────────────┬────elapsed─┐
│ hos.mycmdb.net │ 0.00063795 │
└───────────────────────────┴────────────┘
SELECT
FQDN(),
elapsed
FROM clusterAllReplicas(replicated, system, processes);
SELECT elapsed
FROM clusterAllReplicas(replicated, system, processes)
┌─────elapsed─┐
│ 0.005636027 │
└─────────────┘
┌─────elapsed─┐
│ 0.000228303 │
└─────────────┘
┌─────elapsed─┐
│ 0.000275745 │
└─────────────┘
┌─────elapsed─┐
│ 0.000311621 │
└─────────────┘
┌─────elapsed─┐
│ 0.000270791 │
└─────────────┘
┌─────elapsed─┐
│ 0.000288045 │
└─────────────┘
┌─────elapsed─┐
│ 0.001048277 │
└─────────────┘
┌─────elapsed─┐
│ 0.000256203 │
└─────────────┘
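For the original use case, system.parts, the same table function answers the question in one query with no extra objects to maintain. A sketch, assuming the cluster is named replicated as in the examples above:
SELECT
    hostName() AS host,  -- evaluated on each remote node
    count() AS active_parts,
    formatReadableSize(sum(bytes_on_disk)) AS size_on_disk
FROM clusterAllReplicas(replicated, system, parts)
WHERE active
GROUP BY host
ORDER BY host;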
The remote or remoteSecure table functions, which support multiple addresses, can also be used:
SELECT
hostName() AS host,
any(partition),
count()
FROM remote('node{01..30}-west.contoso.com', system, parts)
GROUP BY host
/*
┌─host──────────┬─any(partition)─┬─count()─┐
│ node01-west │ 202012 │ 733 │
..
│ node30-west │ 202012 │ 687 │
└───────────────┴────────────────┴─────────┘
*/
For the record, we ended up using materialized views:
CREATE MATERIALIZED VIEW _tmp.parts ON CLUSTER main_cluster
ENGINE = Distributed('main_cluster', 'system', 'parts', rand())
AS SELECT * FROM system.parts
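Queried from any node, the Distributed engine behind the view then fans the read out to system.parts on every host. A hypothetical usage sketch against the view above:
SELECT
    hostName() AS host,
    database,
    count() AS active_parts  -- parts currently active, per node and database
FROM _tmp.parts
WHERE active
GROUP BY host, database
ORDER BY host, database;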

Could this leak be caused by the Android 11 DP2?

I'm using the Android 11 DP2 on a Pixel 4 XL, and since then, I get this leak a lot. I suspect that it's caused by the developer preview, but I'm not entirely sure.
I tried to search for this leak online, but I didn't find anything related.
What do you think?
┬───
│ GC Root: System class
│
├─ android.app.ApplicationPackageManager class
│ Leaking: NO (a class is never leaking)
│ ↓ static ApplicationPackageManager.mHasSystemFeatureCache
│ ~~~~~~~~~~~~~~~~~~~~~~
├─ android.app.ApplicationPackageManager$1 instance
│ Leaking: UNKNOWN
│ Anonymous subclass of android.app.PropertyInvalidatedCache
│ ↓ ApplicationPackageManager$1.mCache
│ ~~~~~~
├─ android.app.PropertyInvalidatedCache$1 instance
│ Leaking: UNKNOWN
│ Anonymous subclass of java.util.LinkedHashMap
│ ↓ PropertyInvalidatedCache$1.tail
│ ~~~~
├─ java.util.LinkedHashMap$LinkedHashMapEntry instance
│ Leaking: UNKNOWN
│ ↓ LinkedHashMap$LinkedHashMapEntry.key
│ ~~~
├─ android.app.ApplicationPackageManager$HasSystemFeatureQuery instance
│ Leaking: UNKNOWN
│ ↓ ApplicationPackageManager$HasSystemFeatureQuery.this$0
│ ~~~~~~
├─ android.app.ApplicationPackageManager instance
│ Leaking: UNKNOWN
│ ↓ ApplicationPackageManager.mContext
│ ~~~~~~~~
├─ android.app.ContextImpl instance
│ Leaking: UNKNOWN
│ ↓ ContextImpl.mAutofillClient
│ ~~~~~~~~~~~~~~~
╰→ com.example.app.ui.activities.SplashActivity instance
​ Leaking: YES (ObjectWatcher was watching this because com.example.app.ui.activities.SplashActivity received Activity#onDestroy() callback and Activity#mDestroyed is true)
​ key = 6a69a2a3-1d38-4d27-8c4c-cae915bea1b1
​ watchDurationMillis = 15093
​ retainedDurationMillis = 10089
METADATA
Build.VERSION.SDK_INT: 29
Build.MANUFACTURER: Google
LeakCanary version: 2.2
App process name: com.example.app
Analysis duration: 4326 ms
Yes, this is very likely an Android leak. No idea if it's new, but I haven't seen it before. Do you do anything special with autofill?
You should report it on the Android bug tracker, ideally with a sample project to reproduce it. If you can't reproduce easily, at least providing a link to a heap dump would help investigating.
Based on the names involved in the leaktrace, if ApplicationPackageManager has an application scope (and therefore is not leaking) then ContextImpl.mAutofillClient is holding on to an activity reference for too long.
The field is defined here: https://android.googlesource.com/platform/frameworks/base/+blame/master/core/java/android/app/ContextImpl.java#235
I haven't found any recent changes in autofill that would explain this leak. We can see in the source code of Activity that when an activity gets its base context attached, it sets itself as the autofill client for that base context: https://android.googlesource.com/platform/frameworks/base/+blame/master/core/java/android/app/Activity.java#1124
It never unsets itself, so either that's the mistake, or the base context is expected to have the same scope as the activity.
Another thing that I find weird, though, is static ApplicationPackageManager.mHasSystemFeatureCache, which means that ApplicationPackageManager has a static field whose name starts with m (the member-field prefix). That's a weird name, usually a mistake that doesn't happen in the Android sources. And indeed I can't find it: https://android.googlesource.com/platform/frameworks/base/+blame/master/core/java/android/app/ApplicationPackageManager.java but maybe they haven't shared the updated sources yet? What device are you using this on?

OpenDaylight Fluorine creating reactive flows

I was trying to set up ODL with Mininet using the latest version of ODL, which I downloaded from their site. It seems that they have removed support for DLUX, but I am using NextUI instead. However, I am not able to install the feature odl-l2switch-switch or odl-l2switch-switch-ui; it seems to be no longer available. Is there any other way to enable the reactive flow discovery feature in ODL? I have been trying to find a solution for a while now.
opendaylight-user#root>feature:install odl-l2switch-switch
Error executing command: No matching features for odl-l2switch-switch/0.0.0
opendaylight-user#root>feature:list |grep l2switch
I was not able to get a reactive flow entry after connecting to the controller.
mininet#mininet-vm:~$ sudo mn --topo=tree,2,2 --controller=remote,ip=10.5.1.3,port=6633 --switch=ovsk,protocol=OpenFlow13
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2 h3 h4
*** Adding switches:
s1 s2 s3
*** Adding links:
(s1, s2) (s1, s3) (s2, h1) (s2, h2) (s3, h3) (s3, h4)
*** Configuring hosts
h1 h2 h3 h4
*** Starting controller
c0
*** Starting 3 switches
s1 s2 s3 ...
*** Starting CLI:
mininet> links
s1-eth1<->s2-eth3 (OK OK)
s1-eth2<->s3-eth3 (OK OK)
s2-eth1<->h1-eth0 (OK OK)
s2-eth2<->h2-eth0 (OK OK)
s3-eth1<->h3-eth0 (OK OK)
s3-eth2<->h4-eth0 (OK OK)
mininet> pingall
*** Ping: testing ping reachability
h1 -> X X X
h2 -> X X X
h3 -> X X X
h4 -> X X X
*** Results: 100% dropped (0/12 received)
mininet> dump
<Host h1: h1-eth0:10.0.0.1 pid=29639>
<Host h2: h2-eth0:10.0.0.2 pid=29641>
<Host h3: h3-eth0:10.0.0.3 pid=29643>
<Host h4: h4-eth0:10.0.0.4 pid=29645>
<OVSSwitch{'protocol': 'OpenFlow13'} s1: lo:127.0.0.1,s1-eth1:None,s1-eth2:None pid=29650>
<OVSSwitch{'protocol': 'OpenFlow13'} s2: lo:127.0.0.1,s2-eth1:None,s2-eth2:None,s2-eth3:None pid=29653>
<OVSSwitch{'protocol': 'OpenFlow13'} s3: lo:127.0.0.1,s3-eth1:None,s3-eth2:None,s3-eth3:None pid=29656>
<RemoteController{'ip': '10.5.1.3', 'port': 6633} c0: 10.5.1.3:6633 pid=29633>
The flow table in the ovs switch seems to be empty.
mininet#mininet-vm:~$ sudo ovs-ofctl dump-ports s1
OFPST_PORT reply (xid=0x2): 3 ports
port 1: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
port 2: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
mininet#mininet-vm:~$ sudo ovs-ofctl dump-flows s1
NXST_FLOW reply (xid=0x4):
I have installed the features that look like they are responsible for this, without any success.
odl-openflowplugin-flow-services │ 0.7.0 │ │ Started │ odl-openflowplugin-flow-services │ OpenDaylight :: Openflow Plugin :: Flow Services
features-openflowplugin │ 0.7.0 │ x │ Started │ features-openflowplugin │ features-openflowplugin
odl-openflowplugin-app-lldp-speaker │ 0.7.0 │ x │ Started │ odl-openflowplugin-app-lldp-speaker │ OpenDaylight :: Openflow Plugin :: Application -
odl-openflowplugin-app-topology │ 0.7.0 │ │ Started │ odl-openflowplugin-app-topology │ OpenDaylight :: Openflow Plugin :: Application -
odl-openflowplugin-app-topology-lldp-discovery
Please point me in the right direction. All the online questions and answers seem to be aimed at earlier versions of the ODL controller. This being a highly used controller, there should be a way to achieve this.

julia MethodError: no method matching length(::WinRPM.RPMVersionNumber)

I got this error running Julia v1.0.0 x64 on Windows 10 x64.
Because of this I am unable to use any graphical libraries.
Error: Error building Gtk:
│ [ Info: Multiple package candidates found for mingw64(libjpeg-8.dll), picking newest.
│ ERROR: LoadError: MethodError: no method matching length(::WinRPM.RPMVersionNumber)
│ Closest candidates are:
│ length(!Matched::Core.SimpleVector) at essentials.jl:571
│ length(!Matched::Base.MethodList) at reflection.jl:728
│ length(!Matched::Core.MethodTable) at reflection.jl:802
It is not working currently. All you can do is vote on these GitHub issues:
https://github.com/JuliaGraphics/Gtk.jl/issues/369
https://github.com/JuliaPackaging/WinRPM.jl/issues/163
A good plotting alternative is Plots.jl with the GR backend, or PyPlot.jl (PyPlot.jl can also be used through Plots.jl for a standardized API).
