How to create a graph using a shell script

I am writing data to a text file, as shown below, by running a script. The data is updated every second.
eth0: Sent Bytes: 1 Kb/s | Received Bytes: 2 Kb/s | Sent Packets: 18 Pkts/s | Received Packets: 13 Pkts/s
eth0: Sent Bytes: 1 Kb/s | Received Bytes: 2 Kb/s | Sent Packets: 18 Pkts/s | Received Packets: 12 Pkts/s
eth0: Sent Bytes: 1 Kb/s | Received Bytes: 3 Kb/s | Sent Packets: 20 Pkts/s | Received Packets: 13 Pkts/s
eth0: Sent Bytes: 15 Kb/s | Received Bytes: 4 Kb/s | Sent Packets: 33 Pkts/s | Received Packets: 25 Pkts/s
eth0: Sent Bytes: 1 Kb/s | Received Bytes: 3 Kb/s | Sent Packets: 19 Pkts/s | Received Packets: 12 Pkts/s
I want to make a graph of the number of bytes sent and the number of bytes received, and the same for packets.

You can use https://github.com/holman/spark to create a graph with just a shell script (although spark itself requires bash rather than POSIX sh). You can watch it update in real time with watch.
graph.sh
#!/bin/sh
# Field 1 = Sent Bytes; change it to 2 for Received Bytes (3 and 4 are the packet counters).
field=1
tail "$1" | cut -d '|' -f "$field" | sed -e 's!.*: \([0-9]\+\) .*!\1!' | ./spark/spark
Interactive console
git clone https://github.com/holman/spark
your-process > logfile &
watch sh graph.sh logfile
Output
Every 2.0s: sh graph.sh logfile Fri Dec 19 22:22:04 2014
▁▁▁█▁
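If you want to watch sent and received bytes side by side, a minimal sketch (the graph-both.sh name is arbitrary; it assumes the same log format and the ./spark checkout from above) is to run the pipeline once per field:
graph-both.sh
#!/bin/sh
# Field 1 = Sent Bytes, field 2 = Received Bytes (3 and 4 would be the packet counters).
for field in 1 2; do
    tail "$1" | cut -d '|' -f "$field" | sed -e 's!.*: \([0-9]\+\) .*!\1!' | ./spark/spark
done
Running watch sh graph-both.sh logfile then shows one sparkline per field.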

Inspecting buffer pool content using the V$BH view appears unreliable

I executed this query on Oracle 19c. The partitioned table GRID_LTA has been assigned to the KEEP buffer pool, and every related segment (primary key and indexes included) also has a name prefixed with GRID_LTA:
SQL> set timing on
SQL> set autotrace on
SQL> SELECT count(distinct t.AVG) n FROM MY_SCHEMA.GRID_LTA t;
N
--------------------------------------
308
Elapsed: 00:00:01.80
Execution Plan
----------------------------------------------------------
Plan hash value: 3595206837
--------------------------------------------------------------------------------------------------
| Id  | Operation             | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |          |     1 |    13 |  104K  (1) | 00:00:05 |       |       |
|   1 |  SORT AGGREGATE       |          |     1 |    13 |            |          |       |       |
|   2 |   VIEW                | VW_DAG_0 |   308 |  4004 |  104K  (1) | 00:00:05 |       |       |
|   3 |    HASH GROUP BY      |          |   308 |  1232 |  104K  (1) | 00:00:05 |       |       |
|   4 |     PARTITION LIST ALL|          | 4979K |   18M |  104K  (1) | 00:00:05 |     1 |    22 |
|   5 |      TABLE ACCESS FULL| GRID_LTA | 4979K |   18M |  104K  (1) | 00:00:05 |     1 |    22 |
--------------------------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
3 recursive calls
0 db block gets
376769 consistent gets
0 physical reads
0 redo size
581 bytes sent via SQL*Net to client
588 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
After ten seconds I submitted this query to inspect the content of the V$BH view:
select o.OWNER,o.object_name, count(distinct block#) k1, count(block#) k2
from sys.dba_objects o, SYS.V_$BH b
where b.OBJD = o.OBJECT_ID
and b.status != 'free'
and o.owner = 'MY_SCHEMA'
and instr(o.object_name,'GRID_LTA') > 0
group by o.OWNER,o.object_name;
Statistics
----------------------------------------------------------
27 recursive calls
0 db block gets
1764 consistent gets
0 physical reads
0 redo size
574 bytes sent via SQL*Net to client
577 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
Now it appears the table has been read entirely from the buffer pool, yet when I inspect the buffer pool there isn't a single block for it.
Did I do something wrong, or is there a possible explanation?
How can I inspect the buffer pool in a reliable way?
Just solved it: join on DATA_OBJECT_ID instead of OBJECT_ID, as I've seen done in some examples elsewhere.
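For reference, a sketch of the corrected query (the same query as above, with only the join column swapped as described):
select o.OWNER, o.object_name, count(distinct block#) k1, count(block#) k2
from sys.dba_objects o, SYS.V_$BH b
where b.OBJD = o.DATA_OBJECT_ID   -- join on DATA_OBJECT_ID, not OBJECT_ID
and b.status != 'free'
and o.owner = 'MY_SCHEMA'
and instr(o.object_name,'GRID_LTA') > 0
group by o.OWNER, o.object_name;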

ATQA in Mifare and RFU configurations

I received some cards that are supposed to be Mifare Classic cards.
When I perform level 1 of anticollision (REQA), the ATQA is 04 00.
According to ISO/IEC 14443-3, the first byte of the ATQA is RFU and the second part defines the UID size and anticollision bits. The value 00000 for the anticollision is defined as RFU.
But according to the NXP MIFARE Type Identification Procedure (AN10833), byte 1 of the ATQA can be 0x04. Anyway, I cannot find a MIFARE Classic ATQA hex value in the previously mentioned document that is compatible with this. Also, the document MF1S50YYX_V1 states that the ATQA hex value of a MIFARE Classic card should be 00 xx.
I am pretty sure that I am reading the ATQA with the correct endianness (I am able to perform a full anticollision procedure), so I can't figure out what's going on with the ATQA. Any hint would be much appreciated.
I'm not sure I understand your problem. 04 00 looks like a valid ATQA for MIFARE Classic under the assumption that the octets are ordered in transmission byte order (lower byte first).
The coding of the ATQA according to ISO/IEC 14443-3 is:
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
|  16 |  15 |  14 |  13 |  12 |  11 |  10 |  9  |  8  |  7  |  6  |  5  |  4  |  3  |  2  |  1  |
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
|          RFU          |     PROPR. CODING     |  UID SIZE | RFU |   BIT FRAME ANTICOLLISION   |
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
Since bits 8..1 are the LSB (first transmitted octet) and bits 16..9 are the MSB (second transmitted octet), your ATQA would map to:
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
|  16 |  15 |  14 |  13 |  12 |  11 |  10 |  9  |  8  |  7  |  6  |  5  |  4  |  3  |  2  |  1  |
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
|          RFU          |     PROPR. CODING     |  UID SIZE | RFU |   BIT FRAME ANTICOLLISION   |
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
|                     0x00                      |                     0x04                      |
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
|  0  |  0  |  0  |  0  |  0  |  0  |  0  |  0  |  0  |  0  |  0  |  0  |  0  |  1  |  0  |  0  |
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
So your MIFARE Classic card could be either Classic 1K or Mini (or Plus) with a 4-byte (N)UID. Note that you should not rely on the ATQA to detect UID length and chip type though (this should be done through selection and evaluation of the SAK value).
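For what it's worth, here is a small illustrative snippet (a sketch, not taken from any NXP document) that decodes the two octets under the LSB-first assumption above:
#!/bin/bash
# ATQA 04 00: first transmitted octet (bits 8..1) = 0x04, second octet (bits 16..9) = 0x00.
lsb=0x04
msb=0x00
uid_size=$(( (lsb >> 6) & 0x03 ))   # bits 8..7: 00 = single size (4-byte) UID
anticoll=$(( lsb & 0x1F ))          # bits 5..1: bit frame anticollision (here bit 3 is set)
propr=$(( msb & 0x0F ))             # bits 12..9: proprietary coding
printf 'UID size code: %d, proprietary coding: 0x%X, anticollision bits: 0x%02X\n' "$uid_size" "$propr" "$anticoll"
This prints a UID size code of 0, i.e. a 4-byte (N)UID, which matches the table above.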

Oracle filter on explain plan partitions

I'm doing a proof of concept and I'm seeing some strange behaviour.
I have a table range-partitioned on a date field, and the cost of a query changes a lot depending on whether I use a fixed date or one derived from SYSDATE.
These are the explain plans:
SQL> SELECT *
2 FROM TP_TEST_ELEMENTO_TRAZABLE ET
3 WHERE ET.FEC_RECEPCION
4 BETWEEN TRUNC(SYSDATE-2) AND TRUNC(SYSDATE-1)
5 ;
5109 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 1151442660
------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 5008 | 85136 | 4504 (8)| 00:00:55 | | |
|* 1 | FILTER | | | | | | | |
| 2 | PARTITION RANGE ITERATOR| | 5008 | 85136 | 4504 (8)| 00:00:55 | KEY | KEY |
|* 3 | TABLE ACCESS FULL | TP_TEST_ELEMENTO_TRAZABLE | 5008 | 85136 | 4504 (8)| 00:00:55 | KEY | KEY |
------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(TRUNC(SYSDATE#!-2)<=TRUNC(SYSDATE#!-1))
3 - filter("ET"."FEC_RECEPCION">=TRUNC(SYSDATE#!-2) AND "ET"."FEC_RECEPCION"<=TRUNC(SYSDATE#!-1))
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
376 consistent gets
0 physical reads
0 redo size
137221 bytes sent via SQL*Net to client
4104 bytes received via SQL*Net from client
342 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
5109 rows processed
Using fixed dates:
SQL> SELECT *
2 FROM TP_TEST_ELEMENTO_TRAZABLE ET
3 WHERE ET.FEC_RECEPCION
4 BETWEEN TO_DATE('26/02/2017', 'DD/MM/YYYY') AND TO_DATE('27/02/2017', 'DD/MM/YYYY')
5 ;
5109 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3903280660
-----------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 5008 | 85136 | 11 (0)| 00:00:01 | | |
| 1 | PARTITION RANGE ITERATOR| | 5008 | 85136 | 11 (0)| 00:00:01 | 607 | 608 |
|* 2 | TABLE ACCESS FULL | TP_TEST_ELEMENTO_TRAZABLE | 5008 | 85136 | 11 (0)| 00:00:01 | 607 | 608 |
-----------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("ET"."FEC_RECEPCION"<=TO_DATE(' 2017-02-27 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
376 consistent gets
0 physical reads
0 redo size
137221 bytes sent via SQL*Net to client
4104 bytes received via SQL*Net from client
342 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
5109 rows processed
What's the difference that produces a cost of 4504 and a cost of 11?
Thanks in advance :)
The difference is that when you use SYSDATE, the query has the potential to need any partition. For example, if the table is partitioned daily, then the partition you need to access will be different today and tomorrow. As such, the plan shows KEY:KEY, meaning that the actual partition is resolved at runtime.
With a fixed date, we know at compile time which partition it resolves to. And since it resolves to a single partition, it's more "accurately" costed.
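If you want to see this for yourself, a quick sketch (assuming the same table) is to generate both plans and compare the Pstart/Pstop columns:
-- With SYSDATE the plan shows KEY/KEY (partition resolved at run time);
-- with literal dates it shows concrete partition numbers (here 607 and 608).
EXPLAIN PLAN FOR
SELECT *
FROM TP_TEST_ELEMENTO_TRAZABLE ET
WHERE ET.FEC_RECEPCION BETWEEN TRUNC(SYSDATE-2) AND TRUNC(SYSDATE-1);
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);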

Cassandra writes getting slow under heavy write load - load average surges on one machine in the cluster

We are using Cassandra 3.0.3 on AWS with six r3.xlarge machines (64 GB RAM, 16 cores each). The six machines are spread across two datacenters, but this particular keyspace is replicated in only one DC and therefore on three nodes. We write about 300M rows into Cassandra as a weekly sync.
While loading the data, the load average shoots up to as much as 34 on one machine, with 100% CPU utilization (in this run a lot of data is rewritten). We expected it to be slow, but the performance degradation is dramatic on one of the nodes.
A snapshot of the load averages on the machines:
On Overloaded Machine:
27.47, 29.78, 30.06
On other two:
2.65, 3.95, 4.59
3.76, 2.52, 2.50
nodetool status output:
Overloaded Node:
UN 10.21.56.21 65.94 GB 256 38.7% 57f35206-f264-44ec-b588-f72883139f69 rack1
Other two Nodes:
UN 10.21.56.20 56.34 GB 256 31.9% 2b29f85c-c783-4e20-8cea-95d4e2688550 rack1
UN 10.21.56.23 51.29 GB 256 29.4% fbf26f1d-1766-4f12-957c-7278fd19c20c rack1
I can see that the SSTable count is also high, and the flushed SSTables are ~15 MB in size. The heap size is 8 GB and G1GC is used.
The output of nodetool cfhistograms shows a stark difference between write and read latency, as shown below for one of the larger tables:
| Percentile | SSTables | Write Latency (micros) | Read Latency (micros) | Partition Size (bytes) | Cell Count |
|------------|----------|------------------------|-----------------------|------------------------|------------|
| 50%        | 8        | 20.5                   | 1629.72               | 179                    | 5          |
| 75%        | 10       | 24.6                   | 2346.8                | 258                    | 10         |
| 95%        | 12       | 42.51                  | 4866.32               | 1109                   | 72         |
| 98%        | 14       | 51.01                  | 10090.81              | 3973                   | 258        |
| 99%        | 14       | 61.21                  | 14530.76              | 9887                   | 642        |
| Min        | 0        | 4.77                   | 11.87                 | 104                    | 5          |
| Max        | 17       | 322381.14              | 17797419.59           | 557074610              | 36157190   |
nodetool proxyhistogram output can be found below:
Percentile Read Latency Write Latency Range Latency
(micros) (micros) (micros)
50% 263.21 654.95 20924.30
75% 654.95 785.94 30130.99
95% 1629.72 36157.19 52066.35
98% 4866.32 155469.30 62479.63
99% 7007.51 322381.14 74975.55
Min 6.87 11.87 24.60
Max 12359319.16 30753941.06 63771372.18
One weird thing I can observe here is that the mutation counts vary by a considerable margin per machine:
MutationStage Pool Completed Total:
Overloaded Node: 307531460526
Other Node1: 77979732754
Other Node2: 146376997379
Here the overloaded node's total is ~4x that of Other Node1 and ~2x that of Other Node2. In a well-distributed keyspace with the Murmur3 partitioner, is this scenario expected?
nodetool cfstats output is attached below for reference:
Keyspace: cat-48
Read Count: 122253245
Read Latency: 1.9288832487759324 ms.
Write Count: 122243273
Write Latency: 0.02254735837284069 ms.
Pending Flushes: 0
Table: bucket_distribution
SSTable count: 11
Space used (live): 10149121447
Space used (total): 10149121447
Space used by snapshots (total): 0
Off heap memory used (total): 14971512
SSTable Compression Ratio: 0.637019014259346
Number of keys (estimate): 2762585
Memtable cell count: 255915
Memtable data size: 19622027
Memtable off heap memory used: 0
Memtable switch count: 487
Local read count: 122253245
Local read latency: 2.116 ms
Local write count: 122243273
Local write latency: 0.025 ms
Pending flushes: 0
Bloom filter false positives: 17
Bloom filter false ratio: 0.00000
Bloom filter space used: 9588144
Bloom filter off heap memory used: 9588056
Index summary off heap memory used: 3545264
Compression metadata off heap memory used: 1838192
Compacted partition minimum bytes: 104
Compacted partition maximum bytes: 557074610
Compacted partition mean bytes: 2145
Average live cells per slice (last five minutes): 8.83894307680672
Maximum live cells per slice (last five minutes): 5722
Average tombstones per slice (last five minutes): 1.0
Maximum tombstones per slice (last five minutes): 1
----------------
Also, I can observe in nodetool tpstats that at peak load one node (the one getting overloaded) has pending Native-Transport-Requests:
Overloaded Node:
Native-Transport-Requests 32 11 651595401 0 349
MutationStage 32 41 316508231055 0 0
The other two:
Native-Transport-Requests 0 0 625706001 0 495
MutationStage 0 0 151442471377 0 0
Native-Transport-Requests 0 0 630331805 0 219
MutationStage 0 0 78369542703 0 0
I have also checked nodetool compactionstats; the output is 0 most of the time, and at the times when compaction does happen the load doesn't increase alarmingly.
I traced it down to an issue with the data model and a kernel bug that was not patched in the kernel we used.
Some of the partitions we were writing were very large, which caused an imbalance in the write requests; since RF is 1, one server appeared to be under heavy load.
The kernel issue is described in detail here (in brief, it affects Java apps that use park/wait): datastax blog
This is fixed by a Linux commit.
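As a quick way to confirm this kind of skew (a sketch, using the same tools and the keyspace/table names from the cfstats output above), the maximum partition size is the number to watch:
# The "Partition Size" Max of ~557 MB (557074610 bytes) above is the red flag;
# with RF=1 each partition lives on exactly one node, so one huge partition
# concentrates its write load on a single machine.
nodetool cfhistograms cat-48 bucket_distribution
nodetool cfstats cat-48.bucket_distribution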

How to extract ping parameters from a file in a bash script

I have a ping output file like this:
PING 172.17.9.1 (172.17.9.1) 1000(1028) bytes of data.
1008 bytes from 172.17.9.1: icmp_seq=1 ttl=64 time=0.943 ms
1008 bytes from 172.17.9.1: icmp_seq=2 ttl=64 time=0.855 ms
1008 bytes from 172.17.9.1: icmp_seq=3 ttl=64 time=0.860 ms
.
.
--- 172.17.9.1 ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 9958ms
rtt min/avg/max/mdev = 0.836/1.710/37.591/4.498 ms
I want to extract the packet loss, time, and average rtt with a bash script. What should I do?
Thanks
You can use awk as follows:
$ awk -F"[,/]" '/packet loss/{print $3} /rtt/{print " rtt",$2,$5}' input
0% packet loss
rtt avg 1.710
Try something like:
awk -F',|/' '/time/{x=$3$4}/rtt/{print x " " $5}' ping.txt | sed 's/[^0-9 .]*//g'
Output:
0 958 1.710
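If you want all three values labelled in one pass, a small sketch along the same lines (assuming the standard Linux ping summary format shown in the question):
#!/bin/sh
# Usage: sh extract_ping.sh pingfile
# Prints packet loss, total time and average rtt from the ping summary lines.
awk -F'[,/=]' '
  /packet loss/ { gsub(/[^0-9.]/, "", $3); loss = $3     # " 0% packet loss" -> 0
                  gsub(/[^0-9]/,  "", $4); time = $4 }   # " time 9958ms"    -> 9958
  /^rtt/        { avg = $6 }                             # min/avg/max/mdev -> avg value
  END { printf "loss=%s%% time=%sms avg_rtt=%sms\n", loss, time, avg }
' "$1"
For the file above this prints loss=0% time=9958ms avg_rtt=1.710ms.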
