As the title says, I am trying to calculate the temperature of my CPU to use it in a Conky setup. Strangely, the acpi command gives no temperature information on this laptop... so I am using lm-sensors instead.
cho:~$ sensors
coretemp-isa-0000
Adapter: ISA adapter
Core 0: +54.0°C (high = +95.0°C, crit = +105.0°C)
Core 2: +57.0°C (high = +95.0°C, crit = +105.0°C)
First, I am not sure what Core 0 and Core 2 represent... I assume they correspond to the two cores of my dual-core CPU.
Would it be possible to have a one-liner that calculates the average of those temperatures and prints
55.5°C
as the output?
Thanks in advance.
You can pipe the sensors output through this awk script:
awk '/^Core /{++r; gsub(/[^[:digit:]]+/, "", $3); s+=$3} END{print s/(10*r) "°C"}'
55.5°C
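Putting it together, and assuming your sensors output matches the above (adjust the /^Core / pattern if your labels differ):
sensors | awk '/^Core /{++r; gsub(/[^[:digit:]]+/, "", $3); s+=$3} END{print s/(10*r) "°C"}'
The gsub strips everything but the digits from the third field, so +54.0°C becomes 540; summing those and dividing by 10 times the number of cores gives the average back in °C. In your conkyrc you could then embed this pipeline with conky's ${exec} object.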
I have an air-gapped system (so limited in software access) that generates usage logs daily. The logs contain unique IDs for devices, which I've managed to scrape in the past and dump out to a CSV that I would then clean up in LibreCalc (related to this question I asked here: https://superuser.com/questions/1732415/find-next-matching-event-in-log-and-compare-timings) to get event durations for each one.
This is getting arduous as more devices are added, so I wish to automate calculating the total duration and the number of events for each device. I've had some suggestions to use cut/awk/sed, and I'm a bit lost on how to implement it.
Log Example
message="device02 connected" event_ts=2023-01-10T09:20:21Z
message="device05 connected" event_ts=2023-01-10T09:21:31Z
message="device02 disconnected" event_ts=2023-01-10T09:21:56Z
message="device04 connected" event_ts=2023-01-10T11:12:28Z
message="device05 disconnected" event_ts=2023-01-10T15:26:36Z
message="device04 disconnected" event_ts=2023-01-10T18:23:32Z
I already have a bash script that scrapes these events from the log files in the folder and then outputs it all to a csv.
#!/bin/bash
# Just a datetime stamp for the flatfile
now=$(date +"%Y%m%d")
# Log file path, also where I define what month to scrape
LOGFILE='local.log-202301*'
# Shows what log files are getting read
echo $LOGFILE
# Output line by line to csv
awk '(/connect/ && ORS="\n") || (/disconnect/ && ORS=RS) {field1_var=$1" "$2" "$3","; print field1_var}' $LOGFILE > /home/user/logs/LOG_$now.csv
Ideally I'd like to keep that process so I can manually inspect the file if necessary. But ultimately I'd prefer to automate the event calculations to produce something like below:
Desired Output Example
Device      Total Connection Duration    Total Connections
device01    0h 0m 0s                     0
device02    0h 1m 35s                    1
device03    0h 0m 0s                     0
device04    7h 11m 4s                    1
device05    6h 5m 5s                     1
Hopefully that's enough info; any help or pointers would be greatly appreciated. Thanks.
This isn't based on your script at all, since I didn't get it to produce a CSV, but anyway...
Here's an AWK script that computes the desired result for the given example log file:
# Convert two ISO 8601 timestamps to epoch seconds and return the difference
function time_lapsed(from, to) {
    gsub(/[^0-9 ]/, " ", from);
    gsub(/[^0-9 ]/, " ", to);
    return mktime(to) - mktime(from);
}
BEGIN { OFS = "\t"; }
# Connect lines: remember when the device connected and count the connection
(/ connected/) {
    split($1, a, "=\"", _);   # a[2] is the device identifier
    split($3, b, "=", _);     # b[2] is the timestamp
    device_connected_at[a[2]] = b[2];
    device_connection_count[a[2]]++;
}
# Disconnect lines: add the elapsed time to the device's running total
(/disconnected/) {
    split($1, a, "=\"", _);
    split($3, b, "=", _);
    device_connection_duration[a[2]] += time_lapsed(device_connected_at[a[2]], b[2]);
}
END {
    print "Device", "Total Connection Duration", "Total Connections";
    for (device in device_connection_duration) {
        # strftime renders the duration as a time of day, so this assumes
        # totals under 24h; run with TZ=UTC if your local timezone is not UTC
        print device, strftime("%Hh %Mm %Ss", device_connection_duration[device]), device_connection_count[device];
    }
}
I used it on this example log file:
message="device02 connected" event_ts=2023-01-10T09:20:21Z
message="device05 connected" event_ts=2023-01-10T09:21:31Z
message="device02 disconnected" event_ts=2023-01-10T09:21:56Z
message="device04 connected" event_ts=2023-01-10T11:12:28Z
message="device06 connected" event_ts=2023-01-10T11:12:28Z
message="device05 disconnected" event_ts=2023-01-10T15:26:36Z
message="device02 connected" event_ts=2023-01-10T19:20:21Z
message="device04 disconnected" event_ts=2023-01-10T18:23:32Z
message="device02 disconnected" event_ts=2023-01-10T21:41:33Z
And it produces this output:
Device Total Connection Duration Total Connections
device02 03h 22m 47s 2
device04 08h 11m 04s 1
device05 07h 05m 05s 1
You can pass this program to awk without any flags. It should just work (provided you haven't changed the field and record separators somewhere in your shell session).
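For example, assuming you saved the script above as durations.awk (the filename is just for illustration), you could run it over the same log glob your script uses:
awk -f durations.awk local.log-202301*
Note that mktime and strftime are GNU awk extensions, so make sure your awk is gawk.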
Let me explain what's going on:
First we define the time_lapsed function. In that function we first convert the ISO 8601 timestamps into the format that mktime can handle (YYYY MM DD HH MM SS); we simply drop the trailing Z since it's all UTC. We then return the difference of the epoch timestamps that mktime produces. (mktime interprets the datespec in your local time zone, but since both timestamps get the same treatment, the difference still comes out right, DST edge cases aside.)
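If you want to see that conversion in isolation (with gawk):
echo '2023-01-10T09:20:21Z' | awk '{ gsub(/[^0-9 ]/, " "); print; print mktime($0) }'
The first print shows the cleaned-up datespec (2023 01 10 09 20 21) and the second the corresponding epoch seconds, interpreted in your local time zone.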
Next in the BEGIN block we define the output field separator OFS to be a tab.
Then we define two rules, one for log lines when the device connected and one for when the device disconnected.
Due to the default field separator the input to these rules looks like this:
$1: message="device02
$2: connected"
$3: event_ts=2023-01-10T09:20:21Z
We don't care about $2. We use split to get the device identifier and the timestamp from $1 and $3 respectively.
In the rule for a device connecting, using the device identifier as the key, we store when the device connected and increase the connection count for that device. We don't need to assign 0 first, because awk's associative arrays return "" for keys with no entry yet, and that empty string is coerced to 0 when incremented.
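A quick way to see that coercion in action:
awk 'BEGIN { count["dev"]++; print count["dev"] }'
This prints 1 even though count["dev"] was never assigned before the increment.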
In the rule for a device disconnecting we compute the elapsed time and add it to the total connection time for that device.
Note that this requires every connect to have a matching disconnect in the logs. I.e., this is quite fragile: a missing connect line will throw off the total connection time, and a missing disconnect line will increase the connection count but not the total connection time.
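If you want to sanity-check a log for unmatched events before trusting the totals, a rough sketch along these lines (reusing the same field-splitting trick) would flag devices whose connect and disconnect counts differ:
awk '/ connected/ { split($1, a, "=\""); open[a[2]]++ }
/disconnected/ { split($1, a, "=\""); open[a[2]]-- }
END { for (d in open) if (open[d] != 0) print d, "unbalanced by", open[d] }' local.log-202301*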
In the END rule we print the output header, and for every entry in the associative array device_connection_duration we print the device identifier, the total connection duration and the total connection count.
I hope this gives you some ideas on how to solve your task.
sinfo --format "%O" gives the load of nodes.
Is this an average value of a specific time period?
And how does this value relate to the load averages (1m, 5m, 15m) reported by the uptime command?
Thanks
Yes, it returns the 5min load average value.
SLURM uses sysinfo to measure the CPU load value (I am using SLURM 15.08.5).
In the SLURM source code, the following lines measure the CPU load value:
float shift_float = (float) (1 << SI_LOAD_SHIFT);
if (sysinfo(&info) < 0) {
    *cpu_load = 0;
    return errno;
}
*cpu_load = (info.loads[1] / shift_float) * 100.0;
From the sysinfo man page:
unsigned long loads[3]; /* 1, 5, and 15 minute load averages */
info.loads[1] holds the 5-minute average. sysinfo reports the same load averages that the kernel exposes in /proc/loadavg.
As for SI_LOAD_SHIFT: sysinfo returns the loads as fixed-point integers scaled by 2^SI_LOAD_SHIFT (2^16 = 65536), which is why the code divides by (1 << SI_LOAD_SHIFT) to recover the familiar floating-point value. For more background, please read the reference.
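To relate this back to uptime: both read the same kernel load averages, which are also exposed in /proc/loadavg, where the second field is the 5-minute value that sinfo reports. A quick check:
awk '{ print "5-minute load average:", $2 }' /proc/loadavg
uptime prints the same three values (1, 5 and 15 minutes) at the end of its output line.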
I want to know if there is any way to limit the number of CPU cores a user can use, by user name, in Windows. For example, there are 8 cores and I want to limit a user's global CPU usage to 6 cores, so that he cannot run more than 6 serial jobs (each using one core).
In Linux, that can be done via scripting, but I haven't seen any similar thing, even with PowerShell scripts. Does that mean it cannot be done?
The keyword for this is Affinity.
Core numbering starts at 0 for the first core.
The affinity mask is a bitmap in which bit 0, the least significant bit, stands for the first core:
00000001 = first core
00000010 = second core
00000011 = first and second core
00000100 = third core
00000101 = first and third core
00000111 = first, second and third core
function Set-Affinity([string]$Username,[int[]]$Core){
    # Build the affinity bitmask: each core index contributes 2^index
    # ([int64] so masks for machines with more than 31 cores still fit)
    [int64]$affinity = 0
    $Core | %{ $affinity += [math]::Pow(2, $_) }
    # Apply the mask to every process owned by that user
    # (Get-Process -IncludeUserName requires an elevated shell)
    Get-Process -IncludeUserName | ?{ $_.UserName -eq $Username } | %{
        $_.ProcessorAffinity = $affinity
    }
}
# Restrict all of TESTDOMAIN\TESTUSER's processes to the first four cores
Set-Affinity -Username "TESTDOMAIN\TESTUSER" -Core 0,1,2,3
I'm pretty new to coding. I'm trying to read a PT100 RTD via my Raspberry Pi 3. I read that I needed the MAX31865 RTD amplifier to read the data properly, because the resistance changes are so small. I am fairly certain I have it wired up correctly.
I'm using this code, only slightly edited:
https://github.com/steve71/MAX31865
I'm getting two different outputs so far, but they don't seem to correlate with anything I'm changing (mostly the byte associated with readTemp), since I've run the same code twice and gotten both outputs. The outputs are as follows:
config register byte: ff
RTD ADC Code: 32767
PT100 Resistance: 429.986877 ohms
Straight Line Approx. Temp: 767.968750 degC
Callendar-Van Dusen Temp (degC > 0): 988.792111 degC
high fault threshold: 32767
low fault threshold: 32767
and
config register byte: 08
RTD ADC Code: 0
PT100 Resistance: 0.000000 ohms
Straight Line Approx. Temp: -256.000000 degC
Callendar-Van Dusen Temp (degC > 0): -246.861024 degC
high fault threshold: 0
low fault threshold: 0
Any help would be appreciated.
I'm dealing with exactly the same issue right now. Are you using your Pt100 with 3 or 4 wires?
I fixed the problem by setting the correct configuration register in line 78 of the original code (https://github.com/steve71/MAX31865) to 0xA2:
self.writeRegister(0, 0xA2)
I am using 4 wires, so I had to change bit 4 from 1 (3-wire) to 0 (2- or 4-wire):
0b10100010
After this, I got the following output:
config register byte: 80
RTD ADC Code: 8333
PT100 Resistance: 101.721191 ohms
Straight Line Approx. Temp: 4.406250 degC
Callendar-Van Dusen Temp (degC > 0): 4.406808 degC
high fault threshold: 32767
low fault threshold: 0
Brrr... it's very cold in my room, isn't it? To fix this, I had to change the reference resistance in line 170 to 430 ohms:
R_REF = 430.0 # Reference Resistor
It's curious, because I've read many times that a 400 ohm reference resistance is mounted on these devices. Indeed, the SMD resistor carries the 3-digit code "431", which means 43 × 10^1 = 430 ohms. Hmm...
But now I have it nice and warm in here:
Callendar-Van Dusen Temp (degC > 0): 25.091629 degC
Best regards
Did you get this resolved? In case you didn't, the Python class method below works for me. I remember I had some trouble wiring the force terminals; from memory, for 2-wire operation you have to bridge both force terminals.
def _take_Resistance_Reading(self):
    msg = '%s: taking resistance reading...' % self.Name
    try:
        self.Logger.debug(msg + 'entered method take_resistance_Reading()')
        with self._RLock:
            reg = self.spi.readbytes(9)
        del reg[0]  # delete 0th dummy data
        self.Logger.debug("%s: register values: %s", self.Name, reg)
        RTDdata = reg[1] << 8 | reg[2]
        self.Logger.debug("%s: RTD data: %s", self.Name, hex(RTDdata))
        ADCcode = RTDdata >> 1  # drop the fault bit, leaving the 15-bit ADC code
        self.Logger.debug("%s: ADC code: %s", self.Name, hex(ADCcode))
        self.Vout = ADCcode
        self._Resistance = round(ADCcode * self.Rref / 8192, 1)
        self.Logger.debug(msg + "success, Vout: %s, resistance: %s Ohm" % (self.Vout, self._Resistance))
        return True
    except Exception as e:
        # log the failure and signal it to the caller
        self.Logger.error(msg + "failed: %s" % e)
        return False
I'm preparing a small presentation in IPython where I want to show how easy it is to do parallel operations in Julia.
It's basically a Monte Carlo pi calculation described here
The problem is that I can't make it work in parallel inside an IPython (Jupyter) Notebook; it only uses one core.
I started Julia as: julia -p 4
If I define the functions in the REPL and run them there, it works fine.
@everywhere function compute_pi(N::Int)
"""
Compute pi with a Monte Carlo simulation of N darts thrown in [-1,1]^2
Returns estimate of pi
"""
n_landed_in_circle = 0
for i = 1:N
x = rand() * 2 - 1 # uniformly distributed number on x-axis
y = rand() * 2 - 1 # uniformly distributed number on y-axis
r2 = x*x + y*y # radius squared, in radial coordinates
if r2 < 1.0
n_landed_in_circle += 1
end
end
return n_landed_in_circle / N * 4.0
end
function parallel_pi_computation(N::Int; ncores::Int=4)
"""
Compute pi in parallel, over ncores cores, with a Monte Carlo simulation throwing N total darts
"""
# compute sum of pi's estimated among all cores in parallel
sum_of_pis = @parallel (+) for i=1:ncores
compute_pi(int(N/ncores))
end
return sum_of_pis / ncores # average value
end
julia> @time parallel_pi_computation(int(1e9))
elapsed time: 2.702617652 seconds (93400 bytes allocated)
3.1416044160000003
But when I do:
using IJulia
notebook()
And try to do the same thing inside the Notebook it only uses 1 core:
In [5]: @time parallel_pi_computation(int(10e8))
elapsed time: 10.277870808 seconds (219188 bytes allocated)
Out[5]: 3.141679988
So, why isn't Jupyter using all the cores? What can I do to make it work?
Thanks.
Using addprocs(4) as the first command in your notebook should provide four workers for doing parallel operations from within your notebook.
One way to solve this is to create a kernel that always uses 4 cores. For that some manual work is required. I assume that you are on a unix machine.
In the folder ~/.ipython/kernels/julia-0.x, you will find the following kernel.json file:
{
"display_name": "Julia 0.3.9",
"argv": [
"/usr/local/Cellar/julia/0.3.9_1/bin/julia",
"-i",
"-F",
"/Users/ch/.julia/v0.3/IJulia/src/kernel.jl",
"{connection_file}"
],
"language": "julia"
}
If you copy the whole folder (cp -r julia-0.x julia-0.x-p4) and modify the newly copied kernel.json file:
{
"display_name": "Julia 0.3.9 p4",
"argv": [
"/usr/local/Cellar/julia/0.3.9_1/bin/julia",
"-p",
"4",
"-i",
"-F",
"/Users/ch/.julia/v0.3/IJulia/src/kernel.jl",
"{connection_file}"
],
"language": "julia"
}
The paths will probably be different for you. Note that I only gave the kernel a new name and added the command line argument -p 4.
You should see a new kernel named Julia 0.3.9 p4 which should always use 4 cores.
Also note that this kernel file will not get updated when you update IJulia, so you have to update it manually whenever you update julia or IJulia.
You can add new kernels using this command:
using IJulia
#for 4 cores
installkernel("Julia_4_threads", env=Dict("JULIA_NUM_THREADS"=>"4"))
#or for 8 cores
installkernel("Julia_8_threads", env=Dict("JULIA_NUM_THREADS"=>"8"))
After restarting VS Code, these options will appear in your kernel selector. Note that JULIA_NUM_THREADS controls Julia's thread count, which is not the same thing as the -p worker processes used above.