I want to sort these numbers:
36 ms
4 ms
44 ms
8 ms
like this:
4 ms
8 ms
36 ms
44 ms
using the sort command in Linux. Thanks
16:59:52.092 - 16:59:52.121 PhysicalSharedChannelReconfigurationRequestFDD - PhysicalSharedChannelReconfigurationResponse 29 ms
16:59:51.940 - 16:59:51.943 PhysicalSharedChannelReconfigurationRequestFDD - PhysicalSharedChannelReconfigurationResponse 3 ms
16:59:52.092 - 16:59:52.130 PhysicalSharedChannelReconfigurationRequestFDD - PhysicalSharedChannelReconfigurationResponse 38 ms
16:59:52.029 - 16:59:52.068 PhysicalSharedChannelReconfigurationRequestFDD - PhysicalSharedChannelReconfigurationResponse 39 ms
16:59:52.092 - 16:59:52.133 PhysicalSharedChannelReconfigurationRequestFDD - PhysicalSharedChannelReconfigurationResponse 41 ms
17:59:34.248 - 17:59:34.253 PhysicalSharedChannelReconfigurationRequestFDD - PhysicalSharedChannelReconfigurationResponse 5 ms
18:14:39.263 - 18:14:39.268 PhysicalSharedChannelReconfigurationRequestFDD - PhysicalSharedChannelReconfigurationResponse 5 ms
19:41:59.355 - 19:41:59.360 PhysicalSharedChannelReconfigurationRequestFDD - PhysicalSharedChannelReconfigurationResponse 5 ms
echo '36 ms 4 ms 44 ms 8 ms' | xargs -n 2 | sort -n -k1 | tr '\n' ' '
does the trick using UNIX sort, but you have to go through the intermediate steps of splitting and re-composing the input.
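For illustration, the same split, numeric-sort, and re-join steps can be sketched in Python (the values list is copied from the question):

```python
# Sort "N ms" strings by their numeric part, mirroring what
# `xargs -n 2 | sort -n -k1` does in the shell pipeline above.
values = ["36 ms", "4 ms", "44 ms", "8 ms"]

# Use the leading integer as the sort key; the "ms" unit is ignored,
# just as `sort -n` stops parsing at the first non-numeric character.
ordered = sorted(values, key=lambda v: int(v.split()[0]))

print(" ".join(ordered))  # → 4 ms 8 ms 36 ms 44 ms
```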
I have a table as below, and I need a SELECT to return the per-minute values within the current quarter-hour. For example, if it is now 15:19, I need the SELECT to return the TIMESTAMP and value rows in this quarter between 15:15 and 15:30.
That is, I need the SELECT to return the minutes of the current quarter of an hour. The DB is Oracle.
TIMESTAMP | VALUE
11/11/2019 15:09 | 45
11/11/2019 15:10 | 10
11/11/2019 15:11 | 15
11/11/2019 15:12 | 35
11/11/2019 15:13 | 55
11/11/2019 15:14 | 25
11/11/2019 15:15 | 20
11/11/2019 15:16 | 22
11/11/2019 15:17 | 12
11/11/2019 15:18 | 10
11/11/2019 15:19 | 21
I have tried TRUNC, but with no success.
You need to truncate to 15-minute boundaries. You can do it using the following logic:
select *
from your_table
where your_timestamp_col
  between trunc(systimestamp, 'dd') + floor(to_char(systimestamp, 'sssss.ff') / 900) / 96
      and trunc(systimestamp, 'dd') + ceil(to_char(systimestamp, 'sssss.ff') / 900) / 96
Here, 900 represents the seconds in 15 minutes, and 96 represents the total number of such quarters in a day (24 hours * 4 quarters = 96).
Cheers!!
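The floor/ceil arithmetic above can be sanity-checked outside the database; here is a rough Python equivalent, with `quarter_bounds` being a helper name of my own (not anything from Oracle):

```python
import datetime
import math

def quarter_bounds(now):
    # Seconds since midnight: the Python analogue of
    # TO_CHAR(systimestamp, 'sssss.ff') in the SQL above.
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    secs = (now - midnight).total_seconds()
    # floor(secs / 900) picks the quarter-hour we are in; ceil picks its end.
    start = midnight + datetime.timedelta(seconds=math.floor(secs / 900) * 900)
    end = midnight + datetime.timedelta(seconds=math.ceil(secs / 900) * 900)
    return start, end

s, e = quarter_bounds(datetime.datetime(2019, 11, 11, 15, 19))
print(s.time(), e.time())  # → 15:15:00 15:30:00
```

Note the same caveat as the SQL: if the current time falls exactly on a quarter boundary, floor and ceil agree and the window collapses to a single instant.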
I was wondering whether it is possible to use GROUP BY based on the data of a certain column in a specific way, instead of on the column itself. So my question is: can I create groups based on the occurrences of 0 in a certain field?
DIA MES YEAR TODAY TOMORROW ANALYSIS LIMIT
---------- ---------- ---------- ---------- ---------- ---------- ----------
19 9 2016 111 988 0 150
20 9 2016 988 853 853 150
21 9 2016 853 895 895 150
22 9 2016 895 776 776 150
23 9 2016 776 954 0 150
26 9 2016 954 968 968 150
27 9 2016 968 810 810 150
28 9 2016 810 937 937 150
29 9 2016 937 769 769 150
30 9 2016 769 1020 0 150
3 10 2016 1020 923 923 150
4 10 2016 923 32 32 150
In this case, I would want to create groups like this:
Group 1 (Analysis): 0
Group 2 (Analysis): 853, 895, 776, 0
Group 3 (Analysis): 968, 810, 937, 769, 0
...
Assuming your table name is tbl, something like this should work (it's called the "start-of-group" method if you want to Google it):
select cnt,
       listagg(analysis, ',') within group (order by year, mes, dia) as grp
from ( select tbl.*,
              -- count of zeros in the rows strictly before this one,
              -- so each 0 closes the group it belongs to
              count(case when analysis = 0 then 1 end)
                over (order by year, mes, dia
                      rows between unbounded preceding and 1 preceding) as cnt
       from tbl
     )
group by cnt
;
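To see why a running count of zeros yields the groups the question asks for, here is a small Python model of the idea (my own illustration, using the analysis values from the question's table): each row's group number is the count of zeros seen before it, so every 0 closes the group it belongs to.

```python
# analysis values in date order, taken from the question's table
analysis = [0, 853, 895, 776, 0, 968, 810, 937, 769, 0, 923, 32]

groups = {}
zeros_before = 0        # plays the role of the analytic cnt column
for value in analysis:
    groups.setdefault(zeros_before, []).append(value)
    if value == 0:      # a zero ends the current group
        zeros_before += 1

for cnt in sorted(groups):
    print(cnt, groups[cnt])
```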
So I've started a simple test,
cbc-pillowfight -h localhost -b default -i 1 -I 10000 -T
Got:
[10717.252368] Run
+---------+---------+---------+---------+
[ 20 - 29]us |## - 257
[ 30 - 39]us |# - 106
[ 40 - 49]us |###################### - 2173
[ 50 - 59]us |################ - 1539
[ 60 - 69]us |######################################## - 3809
[ 70 - 79]us |################ - 1601
[ 80 - 89]us |## - 254
[ 90 - 99]us |# - 101
[100 - 109]us | - 43
[110 - 119]us | - 17
[120 - 129]us | - 48
[130 - 139]us | - 23
[140 - 149]us | - 14
[150 - 159]us | - 5
[160 - 169]us | - 5
[170 - 179]us | - 1
[180 - 189]us | - 3
[210 - 219]us | - 1
[270 - 279]us | - 1
+----------------------------------------
Then, a cluster was created by adding this node to another i7 node.
The 'default' bucket is definitely smaller than 1 GB; it has 1 replica and 2 writers, and flush is not enabled.
Now, the same command produces (both hosts used):
50% in 100-200 ns, 1% in 200-900 ns, 49% in 900 ns up to "1 to 9 ms"(!).
After adding the -r (ratio) switch set to 90% SETs:
25% in 100-200 ns, 74% in 900 ns, the remainder in 900 ns up to "1 to 9 ms"(!).
So it seems that write performance suffers badly in clustered mode; why can there be such a large, 10x drop? The network is clean, and there are no high-load services running.
UPD1.
Forgot to add the ideal case: -r 100.
25% in 100-200 ns, 74% in 900 ns.
This makes me think that:
A) the benchmark code is blocking somewhere (a quick read of it showed no signs);
B) the server is doing some non-logged magic on SETs to reconfigure itself, which I don't understand. Replication factor? Isn't that nonsense for such a small dataset? That's what I'm trying to ask here;
C) it's a network problem. But Wireshark shows nothing.
UPD2.
Stopped both nodes, moved them to tmpfs.
For "normal" responses, I got a 20 ns improvement. But slow responses remain slow.
..[cut]
[ 50 - 59]us |## - 164
[ 60 - 69]us |#### - 321
[ 70 - 79]us |######## - 561
[ 80 - 89]us |########## - 701
[ 90 - 99]us |############ - 844
[100 - 109]us |########## - 717
[110 - 119]us |####### - 514
[120 - 129]us |##### - 336
[130 - 139]us |### - 230
[140 - 149]us |## - 175
[150 - 159]us |## - 135
[160 - 169]us |# - 81
..[cut]
[930 - 939]us | - 24
[940 - 949]us |## - 139
[950 - 959]us |##### - 339
[960 - 969]us |####### - 474
[970 - 979]us |####### - 534
[980 - 989]us |###### - 467
[990 - 999]us |##### - 342
[ 1 - 9]ms |######################################## - 2681
[ 10 - 19]ms | - 1
..[cut]
UPD3: screenshot.
Problem is "solved" by switching to three-node configuration on gigabit network.
I have the following result when running the below PowerShell command:
PS C:\> Get-Process svchost
Handles NPM(K) PM(K) WS(K) VM(M) CPU(s) Id ProcessName
------- ------ ----- ----- ----- ------ -- -----------
546 34 18528 14884 136 49.76 260 svchost
357 14 4856 4396 47 18.05 600 svchost
314 17 6088 5388 42 12.62 676 svchost
329 17 10044 8780 50 12.98 764 svchost
1515 49 36104 38980 454 232.04 812 svchost
301 33 9736 6428 54 2.90 832 svchost
328 26 8844 9744 52 4.32 856 svchost
247 18 8144 9912 77 37.50 904 svchost
46 5 1504 968 14 0.02 1512 svchost
278 15 4048 5660 43 3.88 2148 svchost
98 14 2536 2460 35 0.66 2504 svchost
Here I'm trying to calculate the total memory size PM(K) of the process(es). I've got the following line in my ps1 script file:
get-process svchost | foreach {$mem=("{0:N2}MB " -f ($_.pm/1mb))}
It gives the output in the following format
17.58MB 4.79MB 6.05MB 9.99MB 35.29MB 9.56MB 8.64MB 7.95MB 1.47MB 3.95MB 2.48MB
but I need the total size as a single value, like 107.75MB.
How do I calculate the total used memory size of the svchost processes?
Thanks
You can use the Measure-Object cmdlet
$measure = Get-Process svchost | Measure-Object PM -Sum
$mem = ("{0:N2}MB " -f ($measure.sum / 1mb))
Also, you can calculate the total size of the entire collection using the += syntax
$mem = 0
Get-Process svchost | %{$mem += $_.pm}
"{0:N2}MB " -f ($mem/1mb)
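The difference between the question's loop and both answers boils down to "sum first, format once" versus "format inside the loop". A rough Python analogue (the PM values here are made up, at the same scale as the question):

```python
# Hypothetical PM sizes in bytes for three processes (illustrative only).
pm_bytes = [18528 * 1024, 4856 * 1024, 6088 * 1024]

# Formatting each item yields one string per process (the question's symptom)...
per_process = ["{:.2f}MB".format(b / 1024 / 1024) for b in pm_bytes]

# ...while summing first yields a single value, like Measure-Object -Sum.
total = "{:.2f}MB".format(sum(pm_bytes) / 1024 / 1024)

print(per_process)  # → ['18.09MB', '4.74MB', '5.95MB']
print(total)        # → 28.78MB
```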
I am looking into a heap which has a great many allocations, and the number of entries is much more than 20, which is the default for the !heap -stat -h command. For example, as you can see below, the percentages don't add up to 100. Is there any way I can get all the entries in that heap?
!heap -stat -h 0000000006eb0000
heap # 0000000006eb0000
group-by: TOTSIZE max-display: 20
size #blocks total ( %) (percent of total busy bytes)
3a00 92e - 2146c00 (1.11)
27da8 c0 - 1de3e00 (1.00)
4fb48 5c - 1ca4de0 (0.95)
3bc78 6e - 19afb90 (0.86)
14c18 127 - 17eafa8 (0.80)
778e8 2b - 1414ef8 (0.67)
6f30 29d - 1229070 (0.61)
13ed8 a5 - cd8138 (0.43)
4c00 2a0 - c78000 (0.42)
10a18 a4 - aa7760 (0.36)
63a18 1a - a1e670 (0.34)
18e18 61 - 96d718 (0.31)
9f688 c - 778e60 (0.25)
20 3551e - 6aa3c0 (0.22)
a0 a776 - 68a9c0 (0.22)
8b7b8 b - 5fe4e8 (0.20)
1e08 2b0 - 50b580 (0.17)
30 168fc - 43af40 (0.14)
a898 60 - 3f3900 (0.13)
18 287ae - 3cb850 (0.13)
-Thanks,
Brajesh
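Side note on reading the dump: all three columns are hexadecimal, and the third column is the product of the first two. A quick check in Python against the first line above:

```python
# First row of the !heap -stat output: size 3a00, #blocks 92e, total 2146c00.
size = int("3a00", 16)      # 14848 bytes per block
blocks = int("92e", 16)     # 2350 blocks
total = int("2146c00", 16)  # total busy bytes for this allocation size

assert size * blocks == total
print(hex(size * blocks))   # → 0x2146c00
```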
You can increase this total by specifying the group-by parameter followed by a number, so for example:
!heap -stat -h 07300000 -grp A 0n100
gives output:
0:275> !heap -stat -h 07300000 -grp A 0n100
heap # 07300000 group-by: ALLOCATIONSIZE max-display: 100
size #blocks total ( %) (percent of total busy bytes)
7ecc10 1 - 7ecc10 (41.60)
1fc210 1 - 1fc210 (10.42)
1fb310 1 - 1fb310 (10.40)
17d110 1 - 17d110 (7.81)
2c4e0 2 - 589c0 (1.82)
2b330 1 - 2b330 (0.89)
20420 3 - 60c60 (1.98)
20020 4 - 80080 (2.63)
14320 1 - 14320 (0.41)
10020 1 - 10020 (0.33)
fab8 1 - fab8 (0.32)
eb4c 2 - 1d698 (0.60)
c020 1 - c020 (0.25)
9c60 4c - 2e6c80 (15.23)
82c0 3 - 18840 (0.50)
8020 3 - 18060 (0.49)
6420 1 - 6420 (0.13)
5ea0 1 - 5ea0 (0.12)
517c 1 - 517c (0.10)
4f40 1 - 4f40 (0.10)
4ba4 1 - 4ba4 (0.10)
4750 1 - 4750 (0.09)
4020 2 - 8040 (0.16)
3f78 1 - 3f78 (0.08)
2c38 1 - 2c38 (0.06)
25d8 1 - 25d8 (0.05)
21dc 1 - 21dc (0.04)
2040 1 - 2040 (0.04)
2020 3 - 6060 (0.12)
1de0 1 - 1de0 (0.04)
1da8 10 - 1da80 (0.61)
1b6c 3 - 5244 (0.11)
19f0 1 - 19f0 (0.03)
18e4 2 - 31c8 (0.06)
1890 1 - 1890 (0.03)
183c 2 - 3078 (0.06)
1820 1 - 1820 (0.03)
15e8 1 - 15e8 (0.03)
1560 1 - 1560 (0.03)
151c 2 - 2a38 (0.05)
14b0 1 - 14b0 (0.03)
1384 1 - 1384 (0.03)
1098 1 - 1098 (0.02)
102c 3 - 3084 (0.06)
1020 2 - 2040 (0.04)
101f 1 - 101f (0.02)
101c 1 - 101c (0.02)
This will dump the statistics for that heap, grouped by allocation size, for a maximum of 100 rows (0n specifies that the number is decimal; without that prefix it is treated as a hexadecimal value).
See the WinDbg documentation for details of !heap.