I'm working on a Raspberry Pi running Buster with an Adafruit Ultimate GPS HAT. I'm trying to get chrony to do temperature compensation. I've modified chrony.conf to contain:
# Uncomment the following line to turn logging on.
log measurements refclocks statistics tempcomp tracking
tempcomp /sys/class/hwmon/hwmon0/temp1_input 30 26000 0.0 0.000183 0.0
#tempcomp /sys/class/hwmon/hwmon0/temp1_input 30 /var/log/chrony/tempcomp.log
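(For context, my understanding from the chrony.conf man page is that the six-argument form above computes the compensation in ppm as k0 + (T - T0)*k1 + (T - T0)^2*k2, where T is the raw value read from the sysfs file, i.e. millidegrees C here, so T0 = 26000 means 26.0 C. That checks out against the log below: (52095 - 26000) * 0.000183 ≈ 4.775 ppm.)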
The system is currently adding measurements to the tempcomp.log file every 30 seconds. However, if I enable the second (commented-out) tempcomp line above, chronyd dies on restart with this error:
Sep 6 12:31:45 rpi-tick2 chronyd[24713]: chronyd version 3.4 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +SECHASH +IPV6 -DEBUG)
Sep 6 12:31:45 rpi-tick2 chronyd[24713]: Frequency 23.662 +/- 0.165 ppm read from /var/lib/chrony/chrony.drift
Sep 6 12:31:45 rpi-tick2 chronyd[24713]: Fatal error : Could not read tempcomp point from /var/log/chrony/tempcomp.log
Sep 6 12:31:45 rpi-tick2 chronyd[24711]: Could not read tempcomp point from /var/log/chrony/tempcomp.log
I believe this is because the tempcomp.log file has entries like
===========================================
Date (UTC) Time Temp. Comp.
===========================================
2021-09-06 17:40:47 5.2095e+04 4.7754e+00
2021-09-06 17:41:17 5.2582e+04 4.8645e+00
2021-09-06 17:41:47 5.2582e+04 4.8645e+00
2021-09-06 17:42:17 5.3069e+04 4.9536e+00
2021-09-06 17:42:47 5.2582e+04 4.8645e+00
whereas chrony expects something like
20000 1.0
21000 0.64
22000 0.36
23000 0.16
24000 0.04
sorted by temperature rather than by sample time.
So it seems like I'm missing a step somewhere.
Also, once set up, is this a dynamic process where new datapoints are added as we go, or do we stop collecting data and just use the static table to compensate for temps?
Thanks for any insights.
I think you have to create the chrony.tempcomp file manually, likely by analyzing the tempcomp.log data; they are separate files. Then point chrony at it like this:
tempcomp /sys/class/hwmon/hwmon0/temp1_input 30 /etc/chrony.tempcomp
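Something like this untested sketch could seed that file from the measurements chrony has already logged (run as root, since it writes to /etc). It skips the header lines, averages the logged compensation for each distinct temperature reading, and sorts numerically by temperature, which is the order chrony expects:
# Average logged compensation per temperature, sorted by temperature.
awk '/^[0-9]/ { sum[$3] += $4; n[$3]++ }
     END { for (t in sum) printf "%.0f %.4f\n", t, sum[t]/n[t] }' \
    /var/log/chrony/tempcomp.log | sort -n > /etc/chrony.tempcomp
As for your last question: as far as I can tell, chronyd only reads the points file at startup and never updates it itself, so this is a one-shot (or occasionally re-run) calibration rather than a continuously updated table.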
When viewing submitted jobs managed by Slurm, I would like the time limit column (specified by %l) to show only hours instead of the usual days-hours:minutes:seconds format. This is the command I am currently using:
squeue --format="%.6i %.5P %.25j %.8u %.8T %.10M %.5l %.15b %.5C %.6D %R" --sort=+i --me
and this is the example output:
276350 qgpu jobname username RUNNING 1:14:14 1-00:00:00 gres:gpu:v100:1 18 1 s31n02
So, in this case, I would like the elapsed time to remain as is (1:14:14), but the time limit to change from 1-00:00:00 to 24. Is there a way to do it?
This is simply how Slurm displays these times. The elapsed time will eventually be displayed the same way (days-hours:minutes:seconds) once it passes 23:59:59.
You can use a wrapper script to convert it into a different format (see the sketch after the example below). Or, if you know the time limit is never more than a day, just set it to 23:59:00 by using --time=1439 (minutes).
salloc -N1 --time=1439 bash
Using your squeue command:
166 mypartition interactive jyvet RUNNING 7:36 23:59:00 N/A 1 1 mynode
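For the wrapper approach, here is a rough, untested sketch. It assumes the limit is column 7 of the output (as with the format string above), that job names contain no spaces, and that the limit has the [days-]hours:minutes:seconds shape; anything else (e.g. UNLIMITED) is passed through. Rebuilding the line in awk loses the column padding, and minutes and seconds are simply truncated:
squeue --format="%.6i %.5P %.25j %.8u %.8T %.10M %.5l %.15b %.5C %.6D %R" --sort=+i --me |
awk 'NR == 1 || $7 !~ /:/ { print; next }   # keep header and non-time values
     {
         n = split($7, a, "-")               # a[1] = days when a "-" is present
         d = (n == 2) ? a[1] : 0
         split(a[n], h, ":")                 # h[1] = hours
         $7 = d * 24 + h[1]
         print
     }'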
I have a fairly large query running on ClickHouse. When I run it on localhost from the command line it consistently takes about 0.7 s to complete. The issue is when querying from C# / HTTP / Postman: there it takes about ten times as long to return the data. The result is only about 3-4 MB, so I don't think it's a size issue.
I have tried monitoring network latency, but found nothing of note there.
On the host it works like a charm, but from outside it does not. What can I do?
I expected the latency to be a few hundred ms, but it turns out to be 7 s.
Check the timings with curl (see https://clickhouse.yandex/docs/en/interfaces/http/ and https://stackoverflow.com/a/22625150) and compare local vs. remote.
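For example, something like this (the -w variables are standard curl ones; substitute your own query for the trivial one) shows where the time goes:
curl -s -o /dev/null \
     -w 'connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' \
     'http://localhost:8123/?query=SELECT%201'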
ClickHouse's HTTP interface usually provides almost the same performance as the native TCP one, and HTTP can even be faster for small result sets (like 10 rows).
Again: the problem is not HTTP.
Example:
time clickhouse-client -q "select number, arrayMap(x->sipHash64(number,x), range(10)) from numbers(10000)" >native.out
real 0m0.034s
time curl -S -o http.out 'http://localhost:8123/?query=select%20number%2C%20arrayMap(x-%3EsipHash64(number%2Cx)%2C%20range(10))%20from%20numbers(10000)'
real 0m0.017s
ls -l http.out native.out
2108707 Oct 1 16:17 http.out
2108707 Oct 1 16:17 native.out
10,000 rows ≈ 2 MB.
HTTP is faster here: 0.017 s vs. 0.034 s.
Canada -> Germany (openvpn)
time curl -S -o http.out 'http://user:xxx#cl.host.x:8123/?query=select%20number%2C%20arrayMap(x-%3EsipHash64(number%2Cx)%2C%20range(10))%20from%20numbers(10000)'
real 0m1.619s
ping cl.host.x
PING cl.host.x (10.253.52.6): 56 data bytes
64 bytes from 10.253.52.6: icmp_seq=0 ttl=61 time=131.710 ms
64 bytes from 10.253.52.6: icmp_seq=1 ttl=61 time=133.711 ms
With ~130 ms per round trip, connection setup plus a 2 MB transfer (which takes many round trips while TCP ramps up) easily accounts for the extra time: the bottleneck is the network path, not the HTTP interface.
I would like to get the current timestamp of the realtime clock (RTC) in my bash script. The hwclock command can print the current time, but not as a timestamp.
I have considered parsing hwclock's output and converting the result to a timestamp, but the output varies with the current locale. If I give my script to my clients, they could be using different locales than mine, my parsing could break, and the resulting timestamp would be wrong.
So my question is, what would be the best way to get timestamp from RTC? Thanks in advance.
When you call hwclock -D, you get output like the following, which contains the timestamp:
hwclock from util-linux 2.27.1
Using the /dev interface to the clock.
Assuming hardware clock is kept in UTC time.
Waiting for clock tick...
...got clock tick
Time read from Hardware Clock: 2018/09/27 09:03:46
Hw clock time : 2018/09/27 09:03:46 = 1538039026 seconds since 1969
Time since last adjustment is 1538039026 seconds
Calculated Hardware Clock drift is 0.000000 seconds
Thu 27 Sep 2018 11:03:45 CEST .749554 seconds
The following command parses this output to extract the actual timestamp:
sudo hwclock -D | grep "Hw clock time" | sed "s/.* = \([0-9]\+\) seconds .*/\1/"
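To address the locale concern from the question, it is probably safest to force the C locale for the hwclock call, so the output (and therefore the parse) stays stable regardless of the client machine's settings. A sketch (rtc_timestamp is just an illustrative name; newer util-linux spells -D as --verbose):
rtc_timestamp() {
    # Read the RTC and print seconds since the epoch, locale-independent.
    sudo env LC_ALL=C hwclock -D |
        sed -n 's/.*Hw clock time.* = \([0-9]\+\) seconds.*/\1/p'
}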
Hi, I am getting the error "Train dataset for temp stage can not be filled. Branch training terminated. Cascade classifier can't be trained. Check the used training parameters." when I try to run training. I used 50 positive and 100 negative images. I saw a similar question here; my bg.txt file is already in the form mentioned in that solution, but I am still getting the error.
My console output was as follows:
C:\Users\Administrator\Documents\Visual Studio 2010\Projects\cv_traincascade
\Debug>cv_traincascade.exe -data test -vec positives.vec -bg infofile.txt -numPos 50 -
numNeg 100 -numStages 20 -precalcValBufSize 1024 -precalcIdxBufSize 1024 -w 24 -h 24
PARAMETERS:
cascadeDirName: test
vecFileName: positives.vec
bgFileName: infofile.txt
numPos: 50
numNeg: 100
numStages: 20
precalcValBufSize[Mb] :1024
precalcIdxBufSize[Mb] :1024
stageType: BOOST
featureType: HAAR
sampleWidth: 24
sampleHeight: 24
boostType: GAB
minHitRate: 0.995
maxFalseAlarmRate: 0.5
weightTrimRate: 0.95
maxDepth: 1
maxWeakCount: 100
mode: BASIC
===== TRAINING 0-stage =====
<BEGIN
POS count : consumed 50 : 50
Train dataset for temp stage can not be filled. Branch training terminated.
Cascade classifier can't be trained. Check the used training parameters.
Can anyone please tell me what is wrong with my command? Any help will be appreciated. Thank you.
You have a problem with your background dataset; otherwise training would move on through the negative (bg) count and start the classifier. Check that the file locations in your background file "infofile.txt" are all correct.
EDIT: Upload a portion of your bg file.
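For reference, the background (negatives) file is just one image path per line, relative to the directory you run the command from (absolute paths also work); the file names below are made up:
neg/img001.jpg
neg/img002.jpg
neg/img003.jpg
Note that with -numNeg 100 the trainer must be able to open enough of the listed images to sample 100 negative windows, so if too many of the paths fail to resolve it cannot fill the negative set and stops with exactly this message.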
Every night we dump and restore a 200 GB database using:
# Production, PG 9:
pg_dump DATNAME | some-irrelevant-pipe
# QA, PG 8.3:
some-irrelevant-pipe | psql -d DATNAME
I had to go for text-based backups in order to restore a dump from 9 on 8.3.
The restore is painfully and unreasonably slow. I noticed my log is full of these:
2011-05-22 08:02:47 CDT LOG: checkpoints are occurring too frequently (9 seconds apart)
2011-05-22 08:02:47 CDT HINT: Consider increasing the configuration parameter "checkpoint_segments".
2011-05-22 08:02:54 CDT LOG: checkpoints are occurring too frequently (7 seconds apart)
2011-05-22 08:02:54 CDT HINT: Consider increasing the configuration parameter "checkpoint_segments".
My question is: Is it possible that the setting of checkpoint_segments is the bottleneck? What other parameters can I tweak to speed up the process?
That machine has 4 GB RAM. Other possibly relevant settings in postgresql.conf are:
shared_buffers = 1000MB
work_mem = 200MB
maintenance_work_mem = 200MB
effective_cache_size = 2000MB
# fsync and checkpoint settings are default
Did you read this? See especially section 14.4.9.
For the purposes of restoring a database, change:
# synchronous_commit was introduced in PostgreSQL 8.3, so it should be available on the QA server
synchronous_commit = off
# Only set fsync = off if your version of PostgreSQL is too old to support synchronous_commit. If it does support synchronous_commit, never set fsync to anything but on. Ever.
#fsync = off
checkpoint_segments = 25
Regarding checkpoint_segments, set it to match your disk controller's write buffer: WAL segments are 16 MB each, so 25 segments = 400 MB.
Also, make sure your psql is loading everything in a single transaction:
some-irrelevant-pipe | psql -1 -d DATNAME
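If you would rather not touch postgresql.conf at all, a possible variant (assuming the QA server really is 8.3, where synchronous_commit exists as a per-session setting) is to disable synchronous commit just for the restore session via libpq's PGOPTIONS:
# Hypothetical variant: asynchronous commit for this session only,
# leaving the server-wide configuration untouched.
some-irrelevant-pipe | PGOPTIONS='-c synchronous_commit=off' psql -1 -d DATNAME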