Mount propagation of virtual kernel filesystems and read-only option - linux-kernel
I'm trying to mount /sys and /sys/kernel/security read-only at a custom location with slave mount propagation. With older kernels (3.10, 4.15) the read-only option propagates back to the original mount at /sys/kernel/security. With newer kernels it does not propagate, which looks as if a kernel bug was fixed, but it does not propagate even when the custom mountpoint (here /mnt/root/sys) is left shared. How can this be explained?
Steps to reproduce:
# mkdir -p /mnt/host_root/sys
# mount -o ro -t sysfs sys /mnt/host_root/sys
# mount --make-rslave /mnt/host_root/sys
# mount -o ro -t securityfs securityfs /mnt/host_root/sys/kernel/security
Result on CentOS 7 (kernel 3.10) or Ubuntu 18.04 LTS (kernel 4.15):
$ findmnt -o TARGET,PROPAGATION,OPTIONS
TARGET PROPAGATION OPTIONS
/ shared rw,relatime,errors=remount-ro,data=ordered
├─/sys shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/security shared ro,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup shared ro,nosuid,nodev,noexec,mode=755
│ │ ├─/sys/fs/cgroup/unified shared rw,nosuid,nodev,noexec,relatime,nsdelegate
│ │ ├─/sys/fs/cgroup/systemd shared rw,nosuid,nodev,noexec,relatime,xattr,name=systemd
│ │ ├─/sys/fs/cgroup/devices shared rw,nosuid,nodev,noexec,relatime,devices
│ │ ├─/sys/fs/cgroup/memory shared rw,nosuid,nodev,noexec,relatime,memory
│ │ ├─/sys/fs/cgroup/rdma shared rw,nosuid,nodev,noexec,relatime,rdma
│ │ ├─/sys/fs/cgroup/net_cls,net_prio shared rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
│ │ ├─/sys/fs/cgroup/hugetlb shared rw,nosuid,nodev,noexec,relatime,hugetlb
│ │ ├─/sys/fs/cgroup/cpu,cpuacct shared rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
│ │ ├─/sys/fs/cgroup/blkio shared rw,nosuid,nodev,noexec,relatime,blkio
│ │ ├─/sys/fs/cgroup/freezer shared rw,nosuid,nodev,noexec,relatime,freezer
│ │ ├─/sys/fs/cgroup/cpuset shared rw,nosuid,nodev,noexec,relatime,cpuset
│ │ ├─/sys/fs/cgroup/perf_event shared rw,nosuid,nodev,noexec,relatime,perf_event
│ │ └─/sys/fs/cgroup/pids shared rw,nosuid,nodev,noexec,relatime,pids
│ ├─/sys/fs/pstore shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/debug shared rw,relatime
│ ├─/sys/fs/fuse/connections shared rw,relatime
│ └─/sys/kernel/config shared rw,relatime
├─/proc shared rw,nosuid,nodev,noexec,relatime
│ └─/proc/sys/fs/binfmt_misc shared rw,relatime,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=12645
├─/dev shared rw,nosuid,relatime,size=1996564k,nr_inodes=499141,mode=755
│ ├─/dev/pts shared rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
│ ├─/dev/shm shared rw,nosuid,nodev
│ ├─/dev/hugepages shared rw,relatime,pagesize=2M
│ └─/dev/mqueue shared rw,relatime
├─/run shared rw,nosuid,noexec,relatime,size=403944k,mode=755
│ ├─/run/lock shared rw,nosuid,nodev,noexec,relatime,size=5120k
│ ├─/run/user/1000 shared rw,nosuid,nodev,relatime,size=403940k,mode=700,uid=1000,gid=1000
│ └─/run/docker/netns/default shared rw
├─/boot shared rw,relatime,block_validity,barrier,user_xattr,acl
├─/tmp shared rw,relatime,size=1048576k
└─/mnt/root/sys private ro,relatime
└─/mnt/root/sys/kernel/security private ro,relatime
Result on Ubuntu 22.04 LTS (kernel 5.15); here the mount --make-rslave /mnt/host_root/sys step is skipped:
$ findmnt -o TARGET,PROPAGATION,OPTIONS
TARGET PROPAGATION OPTIONS
/ shared rw,relatime
├─/sys shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/security shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup shared rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot
│ ├─/sys/fs/pstore shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/bpf shared rw,nosuid,nodev,noexec,relatime,mode=700
│ ├─/sys/kernel/debug shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/tracing shared rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/fuse/connections shared rw,nosuid,nodev,noexec,relatime
│ └─/sys/kernel/config shared rw,nosuid,nodev,noexec,relatime
├─/proc shared rw,nosuid,nodev,noexec,relatime
│ └─/proc/sys/fs/binfmt_misc shared rw,relatime,fd=29,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=24581
│ └─/proc/sys/fs/binfmt_misc shared rw,nosuid,nodev,noexec,relatime
├─/dev shared rw,nosuid,relatime,size=1951612k,nr_inodes=487903,mode=755,inode64
│ ├─/dev/pts shared rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
│ ├─/dev/shm shared rw,nosuid,nodev,inode64
│ ├─/dev/hugepages shared rw,relatime,pagesize=2M
│ └─/dev/mqueue shared rw,nosuid,nodev,noexec,relatime
├─/run shared rw,nosuid,nodev,noexec,relatime,size=401680k,mode=755,inode64
│ ├─/run/lock shared rw,nosuid,nodev,noexec,relatime,size=5120k,inode64
│ ├─/run/user/1000 shared rw,nosuid,nodev,relatime,size=401676k,nr_inodes=100419,mode=700,uid=1000,gid=1000,inode64
│ ├─/run/credentials/systemd-sysusers.service shared ro,nosuid,nodev,noexec,relatime,mode=700
│ └─/run/docker/netns/default shared rw
├─/boot shared rw,relatime
└─/mnt/root/sys shared ro,relatime
└─/mnt/root/sys/kernel/security shared ro,relatime
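Side note: the peer-group information that findmnt's PROPAGATION column summarizes can also be read directly from /proc/self/mountinfo. The optional shared:N / master:N fields show the actual peer-group membership, and a mount with neither tag is private. A small sketch, using the mountpoints from the output above:

# print the mountinfo lines for /sys, /sys/kernel/security and their copies under /mnt/root/sys;
# the fields right after the mount options carry the shared:/master: propagation tags
grep -E ' /(sys|mnt/root/sys)(/kernel/security)? ' /proc/self/mountinfo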
Related
Fresh Nuxtjs install with yarn doesn't work
I simply create a new nuxt app (following the documentation) using:

yarn create nuxt-app appname

Then I switch into the directory and run (following the documentation):

yarn dev

And get the following error:

✖ Nuxt Fatal Error

Error: @nuxt/components tried to access consola (a peer dependency) but it isn't provided by its ancestors; this makes the require call ambiguous and unsound.

Required package: consola
Required by: @nuxt/components@virtual:4eff43f3a560be194e6097f872c9d8ed0666b260b73fc7573cd09c38d17c2e2d252932f70636f4f4a6035fee79be6a699b4f9693d4a20ba4eb5c9b359f2413a9#npm:2.1.8 (via /home/ev/Documents/Projects/firebase-test/f-test/.yarn/__virtual__/@nuxt-components-virtual-7192147023/0/cache/@nuxt-components-npm-2.1.8-189d4bc3ff-b2ab70da20.zip/node_modules/@nuxt/components/dist/)

Ancestor breaking the chain: nuxt@npm:2.15.7

Require stack:
- /home/ev/Documents/Projects/firebase-test/f-test/.yarn/__virtual__/@nuxt-components-virtual-7192147023/0/cache/@nuxt-components-npm-2.1.8-189d4bc3ff-b2ab70da20.zip/node_modules/@nuxt/components/dist/index.js

Why is this happening? I am running Node v14.16.0 and yarn v3.0.0, and I am on Linux Mint.
I might be a bit late to give an answer to the original poster, but I think I found a solution to the problem. Here is what I could see after the first build failure, using ls -lahA in a terminal:

drwxrwxr-x 2 test test 4,0K 2022-06-29 15:50 components
drwxrwxr-x 7 test test 4,0K 2022-06-29 15:50 .git
drwxrwxr-x 2 test test 4,0K 2022-06-29 15:50 pages
drwxrwxr-x 2 test test 4,0K 2022-06-29 15:50 static
drwxrwxr-x 2 test test 4,0K 2022-06-29 15:50 store
drwxrwxr-x 4 test test 4,0K 2022-06-29 15:50 .yarn
-rw-rw-r-- 1 test test  207 2022-06-29 15:50 .editorconfig
-rw-rw-r-- 1 test test 1,3K 2022-06-29 15:50 .gitignore
-rw-rw-r-- 1 test test 1001 2022-06-29 15:50 nuxt.config.js
-rw-rw-r-- 1 test test  394 2022-06-29 15:50 package.json
-rwxr-xr-x 1 test test 1,1M 2022-06-29 15:50 .pnp.cjs         <-----
-rw-r--r-- 1 test test 8,5K 2022-06-29 15:50 .pnp.loader.mjs  <-----
-rw-rw-r-- 1 test test 2,8K 2022-06-29 15:50 README.md
-rw-rw-r-- 1 test test 375K 2022-06-29 15:50 yarn.lock

To solve the problem, I deleted the .pnp.cjs and .pnp.loader.mjs files and added a .yarnrc.yml file from another project. The .yarnrc.yml file contains only one line:

nodeLinker: node-modules

Then I ran yarn in a terminal and everything worked: I could use yarn dev again and everything was fine. I hope this helps.
For me, simply running npm install and then yarn install afterwards worked like a charm.
Is there a better way to query system tables across a ClickHouse cluster?
We have a modest ClickHouse cluster, ~30 nodes, and want to collect usage stats on it. We are hoping to do this with scheduled queries against the system tables, but a normal query only returns information from the one node you happen to be connected to, and creating a distributed table only works with the *log system tables. We could loop over the nodes, but we don't want to do that. Is there a way to get all the instances of a system table, such as system.parts, in one query?
Distributed tables work with any type of table, and clusterAllReplicas works as well.

create table test on cluster replicated as system.processes
Engine=Distributed(replicated, system, processes);

SELECT FQDN(), elapsed FROM test

┌─FQDN()─────────┬────elapsed─┐
│ hos.mycmdb.net │ 0.00063795 │
└────────────────┴────────────┘

SELECT FQDN(), elapsed FROM clusterAllReplicas(replicated, system, sessions);

SELECT elapsed FROM clusterAllReplicas(replicated, system, processes)

┌─────elapsed─┐
│ 0.005636027 │
└─────────────┘
┌─────elapsed─┐
│ 0.000228303 │
└─────────────┘
┌─────elapsed─┐
│ 0.000275745 │
└─────────────┘
┌─────elapsed─┐
│ 0.000311621 │
└─────────────┘
┌─────elapsed─┐
│ 0.000270791 │
└─────────────┘
┌─────elapsed─┐
│ 0.000288045 │
└─────────────┘
┌─────elapsed─┐
│ 0.001048277 │
└─────────────┘
┌─────elapsed─┐
│ 0.000256203 │
└─────────────┘
The remote or remoteSecure table functions can also be used; they support multiple addresses:

SELECT hostName() AS host, any(partition), count()
FROM remote('node{01..30}-west.contoso.com', system, parts)
GROUP BY host

/*
┌─host────────┬─any(partition)─┬─count()─┐
│ node01-west │ 202012         │     733 │
..
│ node30-west │ 202012         │     687 │
└─────────────┴────────────────┴─────────┘
*/
For the record, we ended up using materialized views:

CREATE MATERIALIZED VIEW _tmp.parts ON CLUSTER main_cluster
ENGINE = Distributed('main_cluster', 'system', 'parts', rand())
AS SELECT * FROM system.parts
Could this leak be caused by the Android 11 DP2?
I'm using the Android 11 DP2 on a Pixel 4 XL, and since then I get this leak a lot. I suspect that it's caused by the developer preview, but I'm not entirely sure. I tried to search for this leak online, but I didn't find anything related. What do you think?

┬───
│ GC Root: System class
│
├─ android.app.ApplicationPackageManager class
│    Leaking: NO (a class is never leaking)
│    ↓ static ApplicationPackageManager.mHasSystemFeatureCache
│                                       ~~~~~~~~~~~~~~~~~~~~~~
├─ android.app.ApplicationPackageManager$1 instance
│    Leaking: UNKNOWN
│    Anonymous subclass of android.app.PropertyInvalidatedCache
│    ↓ ApplicationPackageManager$1.mCache
│                                  ~~~~~~
├─ android.app.PropertyInvalidatedCache$1 instance
│    Leaking: UNKNOWN
│    Anonymous subclass of java.util.LinkedHashMap
│    ↓ PropertyInvalidatedCache$1.tail
│                                 ~~~~
├─ java.util.LinkedHashMap$LinkedHashMapEntry instance
│    Leaking: UNKNOWN
│    ↓ LinkedHashMap$LinkedHashMapEntry.key
│                                       ~~~
├─ android.app.ApplicationPackageManager$HasSystemFeatureQuery instance
│    Leaking: UNKNOWN
│    ↓ ApplicationPackageManager$HasSystemFeatureQuery.this$0
│                                                      ~~~~~~
├─ android.app.ApplicationPackageManager instance
│    Leaking: UNKNOWN
│    ↓ ApplicationPackageManager.mContext
│                                ~~~~~~~~
├─ android.app.ContextImpl instance
│    Leaking: UNKNOWN
│    ↓ ContextImpl.mAutofillClient
│                  ~~~~~~~~~~~~~~~
╰→ com.example.app.ui.activities.SplashActivity instance
     Leaking: YES (ObjectWatcher was watching this because com.example.app.ui.activities.SplashActivity received Activity#onDestroy() callback and Activity#mDestroyed is true)
     key = 6a69a2a3-1d38-4d27-8c4c-cae915bea1b1
     watchDurationMillis = 15093
     retainedDurationMillis = 10089

METADATA

Build.VERSION.SDK_INT: 29
Build.MANUFACTURER: Google
LeakCanary version: 2.2
App process name: com.example.app
Analysis duration: 4326 ms
Yes, this is very likely an Android leak. No idea if it's new, but I haven't seen it before. Do you do anything special with autofill? You should report it on the Android bug tracker, ideally with a sample project that reproduces it. If you can't reproduce it easily, at least providing a link to a heap dump would help the investigation.

Based on the names involved in the leak trace, if ApplicationPackageManager has application scope (and therefore is not leaking), then ContextImpl.mAutofillClient is holding on to an activity reference for too long. The field is defined here: https://android.googlesource.com/platform/frameworks/base/+blame/master/core/java/android/app/ContextImpl.java#235

I haven't found any recent changes in autofill that would explain this leak. We can see in the source code of Activity that when an activity gets its base context attached, it sets itself as the autofill client for that base context: https://android.googlesource.com/platform/frameworks/base/+blame/master/core/java/android/app/Activity.java#1124

It never unsets itself, so either that's the mistake, or the base context is expected to have the same scope as the activity.

Another thing I find weird is static ApplicationPackageManager.mHasSystemFeatureCache, which means ApplicationPackageManager has a static field whose name starts with m (the member-field prefix). That's an odd name, usually a mistake that doesn't happen in the Android sources. And indeed I can't find it: https://android.googlesource.com/platform/frameworks/base/+blame/master/core/java/android/app/ApplicationPackageManager.java but maybe they haven't published the updated sources yet?

What device are you using this on?
Julia MethodError: no method matching length(::WinRPM.RPMVersionNumber)
I got this error running Julia v1.0.0 x64 on Windows 10 x64. Because of this I am unable to use any graphical libraries.

Error: Error building Gtk:
│ [ Info: Multiple package candidates found for mingw64(libjpeg-8.dll), picking newest.
│ ERROR: LoadError: MethodError: no method matching length(::WinRPM.RPMVersionNumber)
│ Closest candidates are:
│   length(!Matched::Core.SimpleVector) at essentials.jl:571
│   length(!Matched::Base.MethodList) at reflection.jl:728
│   length(!Matched::Core.MethodTable) at reflection.jl:802
It is not working currently. All you can do is vote on these GitHub issues:

https://github.com/JuliaGraphics/Gtk.jl/issues/369
https://github.com/JuliaPackaging/WinRPM.jl/issues/163

A good plotting alternative is Plots.jl with the GR backend, or PyPlot.jl (PyPlot.jl can also be used through Plots.jl for a standardized API).
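For instance, a minimal smoke test of the suggested Plots.jl + GR route might look like this (just a sketch; it assumes Plots.jl has already been installed with Pkg.add("Plots")):

using Plots
gr()             # select the GR backend
plot(rand(10))   # draw a simple line plot to confirm the backend renders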
Replacing the COALESCE function in ClickHouse
I'm a beginner in ClickHouse and am trying to use it for the statistics of our project. Some desktop software sends information to our servers, and we need to group the operating systems into a short list. This example query:

SELECT OS FROM Req GROUP BY OS

┌─OS──────────────────────────────────────────────────────────────────────┐
│ Майкрософт Windows 10 Корпоративная 2016 с долгосрочным обслуживанием    │
│ Майкрософт Ознакомительная версия Windows Server 2012 Standard           │
│ Майкрософт Windows 10 Домашняя для одного языка                          │
│ Microsoft Windows 8.1 Enterprise                                         │
│ Майкрософт Windows 8 Корпоративная Прогрессивная                         │
│ Microsoft Windows Server 2008 R2 Standard                                │
│ Microsoft Windows 8.1 mit Bing                                           │
│ Microsoft Windows 10 Home                                                │
│ Microsoft Windows 8 Enterprise N                                         │
│ Майкрософт Windows 8.1 Профессиональная                                  │
│ Майкрософт Windows 8 Профессиональная                                    │
│ Microsoft Windows 7 Rеактивная                                           │
│ Microsoft Windows 10 Pro Insider Preview                                 │

needs to be aggregated into a short, clean list:

8       xxx
8.1     yyy
2008    zzz
2008 R2 aaa

and so on. I could not find a COALESCE function, so I tried using extract to identify the OS by its version number:

select extract(OS, ' 7 ') || extract(OS, ' 8.1 ') || extract(OS, ' 10 ') || extract(OS, ' 2008 R2 ') || extract(OS, ' 2008 ') || extract(OS, ' 2012 R2 ') || extract(OS, ' 2012 ') as Value,
       count(distinct SID)
from Req
group by Value
limit 100000;

But! Because Windows 2008 and Windows 2008 R2 both contain '2008' in the version string, I get this result:

┌─Value───────────┬─uniqExact(SID)─┐
│                 │            224 │
│ 2012            │             17 │
│ 10              │           1315 │
│ 7               │           4282 │
│ 2008            │             20 │
│ 2012 R2 2012    │             57 │
│ 2008 R2 2008    │            136 │
│ 8.1             │            754 │
└─────────────────┴────────────────┘

Which function should I use in my case? Thanks.
What you need here is multiIf: if the string '2012 R2' is found, return that; else if '2012' is found, return that; and so on. In your case you could do something like this:

multiIf(like(OS, '% 2008 R2 %'), extract(OS, ' 2008 R2 '),
        like(OS, '% 2008 %'),    extract(OS, ' 2008 '),
        'OS_not_found') as Value

This is basically an if / else if, and you can add as many branches as you like; I only wrote two here to keep it short, but in your case just add all the OS values you need. It's a bit verbose, but it gets the job done. The function like(OS, '% 2008 R2 %') returns true if the pattern matches and false otherwise; '%' is the LIKE wildcard in ClickHouse (it matches any sequence of characters). Since multiIf stops at the first condition that matches, you will not get two extracted strings concatenated in the same value.
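Applied to the question's Req table and SID column, the full query could look roughly like this (just a sketch, returning fixed labels rather than extract() for brevity, and covering only the versions mentioned in the question):

SELECT
    multiIf(
        like(OS, '% 2012 R2 %'), '2012 R2',
        like(OS, '% 2012 %'),    '2012',
        like(OS, '% 2008 R2 %'), '2008 R2',
        like(OS, '% 2008 %'),    '2008',
        like(OS, '% 8.1 %'),     '8.1',
        like(OS, '% 10 %'),      '10',
        like(OS, '% 7 %'),       '7',
        'OS_not_found') AS Value,
    uniqExact(SID) AS Users
FROM Req
GROUP BY Value

The important detail is to test the more specific patterns ('2012 R2', '2008 R2') before the generic ones, since multiIf returns on the first condition that matches.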
Found it!

select OS,
       arrayFirst(x -> cast(position(OS, x) as UInt8), [' 8 ', ' 8.1 ', '2008 R2', '2008'])
from Req
limit 1000;

(Without the CAST I get an exception: DB::Exception: Unexpected type of filter column. Strange...)