Running piped bash script - bash

I need to execute my bash script (output.sh) as a piped script; see below.
echo "Dec 10 03:39:13 cgnat2.dd.com 1 2015 Dec 9 14:39:11 01-g3-adsl - - NAT44 - [UserbasedW - 100.70.24.236 vrf-testnet - 222.222.34.65 - 3072 4095 - - ][UserbasedW - 100.70.25.9 vrf-testnet - 222.222.34.65 - 16384 17407 - - ][UserbasedW - 100.70.25.142 vrf-testnet - 222.222.34.69 - 9216 10239 - - ]" | ./output.sh
How can I read the echoed text inside output.sh, and how do I split it on [?
The output should be:
[UserbasedW - 100.70.24.236 vrf-testnet - 222.222.34.65 - 3072 4095 - - ]
[UserbasedW - 100.70.25.9 vrf-testnet - 222.222.34.65 - 16384 17407 - - ]
[UserbasedW - 100.70.25.142 vrf-testnet - 222.222.34.69 - 9216 10239 - - ]
Please help me; I have no idea. :(

With grep:
| grep -o '\[[^]]*\]'
or with GNU grep:
| grep -oP '\[.*?\]'
Output:
[UserbasedW - 100.70.24.236 vrf-testnet - 222.222.34.65 - 3072 4095 - - ]
[UserbasedW - 100.70.25.9 vrf-testnet - 222.222.34.65 - 16384 17407 - - ]
[UserbasedW - 100.70.25.142 vrf-testnet - 222.222.34.69 - 9216 10239 - - ]
With a bash script (e.g. output.sh):
#!/bin/bash
grep -o '\[[^]]*\]'
Usage:
echo "... your string ..." | ./output.sh
See: The Stack Overflow Regular Expressions FAQ
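If your grep lacks -o, a rough equivalent (a sketch assuming GNU sed) is to break the line before every [ and keep only the bracketed lines; output.sh could instead contain:
#!/bin/bash
# insert a newline before each '[' and keep only lines starting with one
sed 's/\[/\n[/g' | grep '^\['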

If the goal is only to remove the header before the first [, add a sed stage before the pipe into your script (assuming your echo is just a sample of some other source):
echo "Dec 10 03:39:13 cgnat2.dd.com 1 2015 Dec 9 14:39:11 01-g3-adsl - - NAT44 - [UserbasedW - 100.70.24.236 vrf-testnet - 222.222.34.65 - 3072 4095 - - ][UserbasedW - 100.70.25.9 vrf-testnet - 222.222.34.65 - 16384 17407 - - ][UserbasedW - 100.70.25.142 vrf-testnet - 222.222.34.69 - 9216 10239 - - ]" \
| sed 's/^[^[]*//' \
| ./output.sh

Related

WebSockets Message Frame Length should be 5, but am getting 33

So I'm trying to decode a WebSocket message frame at the moment; one such example:
0x81 0x85 0x37 0xfa 0x21 0x3d 0x7f 0x9f 0x4d 0x51 0x58 (contains "Hello")
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-------+-+-------------+-------------------------------+
|F|R|R|R| opcode|M| Payload len |    Extended payload length    |
|I|S|S|S|  (4)  |A|     (7)     |             (16/64)           |
|N|V|V|V|       |S|             |   (if payload len==126/127)   |
| |1|2|3|       |K|             |                               |
+-+-+-+-+-------+-+-------------+ - - - - - - - - - - - - - - - +
|     Extended payload length continued, if payload len == 127  |
+ - - - - - - - - - - - - - - - +-------------------------------+
|                               |Masking-key, if MASK set to 1  |
+-------------------------------+-------------------------------+
| Masking-key (continued)       |          Payload Data         |
+-------------------------------- - - - - - - - - - - - - - - - +
:                     Payload Data continued ...                :
+ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
|                     Payload Data continued ...                |
+---------------------------------------------------------------+
So the payload length bits are coming out as 0100001 = 33. Surely it should be 5? Or have I missed something here?
Edit: The value of the header bits up to the end of "Payload len" is:
1000000110100001
[1][0][0][0][0001][1][0100001]
Nick.
Okay, I'm being stupid, it's in bits, not bytes!
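For reference, a quick sketch of the header bit arithmetic (written in shell to match the rest of this page; the variable names are mine):
#!/bin/bash
# First two header bytes of the example frame: 0x81 0x85
b0=$((0x81)); b1=$((0x85))
echo "FIN=$(( (b0 >> 7) & 1 ))"      # 1: final fragment
echo "opcode=$(( b0 & 0x0F ))"       # 1: text frame
echo "MASK=$(( (b1 >> 7) & 1 ))"     # 1: payload is masked
echo "payload_len=$(( b1 & 0x7F ))"  # 5: five payload bytes ("Hello")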

Couchbase: possible reasons for 10x difference in cbs-pillowfight latency test when running in cluster mode

So I've started a simple test,
cbs-pillowfight -h localhost -b default -i 1 -I 10000 -T
Got:
[10717.252368] Run
+---------+---------+---------+---------+
[ 20 - 29]us |## - 257
[ 30 - 39]us |# - 106
[ 40 - 49]us |###################### - 2173
[ 50 - 59]us |################ - 1539
[ 60 - 69]us |######################################## - 3809
[ 70 - 79]us |################ - 1601
[ 80 - 89]us |## - 254
[ 90 - 99]us |# - 101
[100 - 109]us | - 43
[110 - 119]us | - 17
[120 - 129]us | - 48
[130 - 139]us | - 23
[140 - 149]us | - 14
[150 - 159]us | - 5
[160 - 169]us | - 5
[170 - 179]us | - 1
[180 - 189]us | - 3
[210 - 219]us | - 1
[270 - 279]us | - 1
+----------------------------------------
Then, a cluster was created by adding this node to another i7 node.
The 'default' bucket is definitely smaller than 1 GB; it has 1 replica and 2 writers, and flush is not set.
Now the same command produces (with both hosts used):
50% in 100-200 ns, 1% in 200-900 ns, and 49% from 900 ns up to "1 to 9 ms"! WTF.
After adding the -r (ratio) switch set to 90% SETs:
25% in 100-200 ns, 74% in 900 ns, and the rest from 900 ns up to "1 to 9 ms"!
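Presumably that run was the same command with the ratio switch added, something along these lines (the exact flag syntax is an assumption based on the description above):
cbs-pillowfight -h localhost -b default -i 1 -I 10000 -r 90 -T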
So it seems that write performance suffers badly in clustered mode. Why would there be such a large, 10x drop? The network is clean, and no high-load services are running.
UPD1.
Forgot to add the ideal case: -r 100.
25% in 100-200 ns, 74% in 900 ns.
This makes me think that:
A) the benchmark code is blocking somewhere (a quick read of it showed no signs of that);
B) the server is doing some unlogged magic on SETs that I don't know how to reconfigure. Replication factor? Isn't that nonsense for such a small dataset? That's what I'm trying to ask here;
C) it's a network problem. But Wireshark shows nothing.
UPD2.
Stopped both nodes, moved them to tmpfs.
For a "normal" responses, got 20ns improval. But slow responses remain slow.
..[cut]
[ 50 - 59]us |## - 164
[ 60 - 69]us |#### - 321
[ 70 - 79]us |######## - 561
[ 80 - 89]us |########## - 701
[ 90 - 99]us |############ - 844
[100 - 109]us |########## - 717
[110 - 119]us |####### - 514
[120 - 129]us |##### - 336
[130 - 139]us |### - 230
[140 - 149]us |## - 175
[150 - 159]us |## - 135
[160 - 169]us |# - 81
..[cut]
[930 - 939]us | - 24
[940 - 949]us |## - 139
[950 - 959]us |##### - 339
[960 - 969]us |####### - 474
[970 - 979]us |####### - 534
[980 - 989]us |###### - 467
[990 - 999]us |##### - 342
[ 1 - 9]ms |######################################## - 2681
[ 10 - 19]ms | - 1
..[cut]
UPD3: screenshot.
The problem was "solved" by switching to a three-node configuration on a gigabit network.

plot matrix non-numeric points in different colors using gnuplot

I have a file 'matrix.dat' that looks like this:
10584 179888 115816 16768 91440 79928 50656 23624 21712 51776 89670 21815 13536 18984 11997 16221 10336 432 632 2024 - - - - - - - - - - - - - 408 - - - - - - - - - - - - - - - B - - - B - - B - - - - - - - - - - - - 3672 - - 4480 - - - - - - - - 17600 11632 1008 4384 144 - 216 72 - - - - - 768 336 - 384 - - 408 5312 - - - 72 3648 - - - - - - - - - - - - 1088 - - 224 - - - - - - - - - - - 1696 2040 2664 216 - B 344 - - - - - 336 296 248 88 88 616 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 2840 - - 128 16 - 112 - - - - - 1904 2776 24 B
I want to plot the numbers using a palette, '-' in white, and 'B' in black.
In gnuplot, I use a log2 palette (blue -> cyan -> green -> orange -> red) and set '-' as missing data:
set palette model HSV functions 0.666*(1-gray), 1, 1
set logscale cb 2
set datafile missing "-"
plot 'matrix.dat' matrix with image
So far I can only plot the numbers and '-' in the desired colors. How can I plot 'B' in black?
I solved the problem using a piecewise function. Just a small change:
set palette model HSV functions gray>0 ? 0.666*(1-gray):0, 1, gray>0 ? 1:0
then change every 'B' in the file into '0'. The idea is to use black for 0 and the palette colors for the non-zero values. Thanks!
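A hedged one-liner for that substitution (GNU sed assumed; the output file name is mine):
# replace each standalone 'B' with '0'; \b is a GNU sed word boundary
sed 's/\bB\b/0/g' matrix.dat > matrix0.dat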

Windbg !heap -stat -h command: how to get more than 20 entries

I am looking into a heap which has many allocations, and the number of distinct entries is much larger than 20, the default for the !heap -stat -h command. For example, as you can see below, the percentages don't add up to 100. Is there any way I can get all the entries in that heap?
!heap -stat -h 0000000006eb0000
heap # 0000000006eb0000
group-by: TOTSIZE max-display: 20
size #blocks total ( %) (percent of total busy bytes)
3a00 92e - 2146c00 (1.11)
27da8 c0 - 1de3e00 (1.00)
4fb48 5c - 1ca4de0 (0.95)
3bc78 6e - 19afb90 (0.86)
14c18 127 - 17eafa8 (0.80)
778e8 2b - 1414ef8 (0.67)
6f30 29d - 1229070 (0.61)
13ed8 a5 - cd8138 (0.43)
4c00 2a0 - c78000 (0.42)
10a18 a4 - aa7760 (0.36)
63a18 1a - a1e670 (0.34)
18e18 61 - 96d718 (0.31)
9f688 c - 778e60 (0.25)
20 3551e - 6aa3c0 (0.22)
a0 a776 - 68a9c0 (0.22)
8b7b8 b - 5fe4e8 (0.20)
1e08 2b0 - 50b580 (0.17)
30 168fc - 43af40 (0.14)
a898 60 - 3f3900 (0.13)
18 287ae - 3cb850 (0.13)
-Thanks,
Brajesh
You can increase this limit by specifying the group-by parameter followed by a count, for example:
!heap -stat -h 07300000 -grp A 0n100
gives output:
0:275> !heap -stat -h 07300000 -grp A 0n100
heap # 07300000 group-by: ALLOCATIONSIZE max-display: 100
size #blocks total ( %) (percent of total busy bytes)
7ecc10 1 - 7ecc10 (41.60)
1fc210 1 - 1fc210 (10.42)
1fb310 1 - 1fb310 (10.40)
17d110 1 - 17d110 (7.81)
2c4e0 2 - 589c0 (1.82)
2b330 1 - 2b330 (0.89)
20420 3 - 60c60 (1.98)
20020 4 - 80080 (2.63)
14320 1 - 14320 (0.41)
10020 1 - 10020 (0.33)
fab8 1 - fab8 (0.32)
eb4c 2 - 1d698 (0.60)
c020 1 - c020 (0.25)
9c60 4c - 2e6c80 (15.23)
82c0 3 - 18840 (0.50)
8020 3 - 18060 (0.49)
6420 1 - 6420 (0.13)
5ea0 1 - 5ea0 (0.12)
517c 1 - 517c (0.10)
4f40 1 - 4f40 (0.10)
4ba4 1 - 4ba4 (0.10)
4750 1 - 4750 (0.09)
4020 2 - 8040 (0.16)
3f78 1 - 3f78 (0.08)
2c38 1 - 2c38 (0.06)
25d8 1 - 25d8 (0.05)
21dc 1 - 21dc (0.04)
2040 1 - 2040 (0.04)
2020 3 - 6060 (0.12)
1de0 1 - 1de0 (0.04)
1da8 10 - 1da80 (0.61)
1b6c 3 - 5244 (0.11)
19f0 1 - 19f0 (0.03)
18e4 2 - 31c8 (0.06)
1890 1 - 1890 (0.03)
183c 2 - 3078 (0.06)
1820 1 - 1820 (0.03)
15e8 1 - 15e8 (0.03)
1560 1 - 1560 (0.03)
151c 2 - 2a38 (0.05)
14b0 1 - 14b0 (0.03)
1384 1 - 1384 (0.03)
1098 1 - 1098 (0.02)
102c 3 - 3084 (0.06)
1020 2 - 2040 (0.04)
101f 1 - 101f (0.02)
101c 1 - 101c (0.02)
This dumps the entries for that heap, grouped by allocation size, for a maximum of 100 rows (the 0n prefix specifies a decimal number; without it the value is interpreted as hexadecimal).
See the WinDbg documentation for details of !heap.
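To keep the question's TOTSIZE grouping while raising the row limit, the same pattern should work with the total-size grouping letter (S, if I recall the -grp options correctly; treat this as an assumption):
!heap -stat -h 0000000006eb0000 -grp S 0n100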

Extract data from log file in specified range of time [duplicate]

This question already has answers here:
Filter log file entries based on date range
(5 answers)
Closed 6 years ago.
I want to extract information from a log file using a shell script (bash) based on time range. A line in the log file looks like this:
172.16.0.3 - - [31/Mar/2002:19:30:41 +0200] "GET / HTTP/1.1" 200 123 "" "Mozilla/5.0 (compatible; Konqueror/2.2.2-2; Linux)"
I want to extract data from specific intervals. For example, I need to look only at the events which happened during the last X minutes or X days before the last recorded entry. I'm new to shell scripting, but I have tried to use the grep command.
You can use sed for this. For example:
$ sed -n '/Feb 23 13:55/,/Feb 23 14:00/p' /var/log/mail.log
Feb 23 13:55:01 messagerie postfix/smtpd[20964]: connect from localhost[127.0.0.1]
Feb 23 13:55:01 messagerie postfix/smtpd[20964]: lost connection after CONNECT from localhost[127.0.0.1]
Feb 23 13:55:01 messagerie postfix/smtpd[20964]: disconnect from localhost[127.0.0.1]
Feb 23 13:55:01 messagerie pop3d: Connection, ip=[::ffff:127.0.0.1]
...
How it works
The -n switch tells sed not to print every line it reads (which it does by default).
The p after the two expressions tells it to print the lines that the preceding address selects.
The address range /pattern1/,/pattern2/ selects everything between the first pattern and the second pattern, inclusive. In this case it prints every line found between the string Feb 23 13:55 and the string Feb 23 14:00.
More info here
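Applied to the asker's Apache-style timestamps, the same idea would look something like this (the log file name is hypothetical):
sed -n '/31\/Mar\/2002:19:30/,/31\/Mar\/2002:19:35/p' access.log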
Use grep and regular expressions. For example, if you want a 4-minute interval of logs:
grep "31/Mar/2002:19:3[1-5]" logfile
will return all logs lines between 19:31 and 19:35 on 31/Mar/2002.
Supposing you need the last 5 days starting from today, 27/Sep/2011, you may use the following:
grep "2[3-7]/Sep/2011" logfile
Well, I spent some time on your date format, but I finally worked it out.
Let's take an example file (named logFile); I made it a bit short.
Say you want the last 5 minutes' worth of log lines from this file:
172.16.0.3 - - [31/Mar/2002:19:20:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:20:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:20:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:20:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:20:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:20:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:20:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:20:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:20:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:20:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:20:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:20:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:20:41 +0200] "GET
### lines below are what you want (5 mins till the last record)
172.16.0.3 - - [31/Mar/2002:19:27:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:27:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:27:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:27:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:27:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:27:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:27:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:27:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:27:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:27:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:27:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:27:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:27:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:27:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:30:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:30:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:30:41 +0200] "GET
172.16.0.3 - - [31/Mar/2002:19:30:41 +0200] "GET
Here is the solution:
# customize this variable; the important thing is to convert to seconds,
# e.g. 5 days = $((5*24*3600))
x=$((5*60)) # here we take 5 minutes as the example
# this line gets the timestamp (in seconds) of the last line of your log file
last=$(tail -n1 logFile|awk -F'[][]' '{ gsub(/\//," ",$2); sub(/:/," ",$2); "date +%s -d \""$2"\""|getline d; print d;}' )
# this awk gives you the lines you need:
awk -F'[][]' -v last=$last -v x=$x '{ gsub(/\//," ",$2); sub(/:/," ",$2); "date +%s -d \""$2"\""|getline d; if (last-d<=x)print $0 }' logFile
Output:
172.16.0.3 - - 31 Mar 2002 19:27:41 +0200 "GET
172.16.0.3 - - 31 Mar 2002 19:27:41 +0200 "GET
172.16.0.3 - - 31 Mar 2002 19:27:41 +0200 "GET
172.16.0.3 - - 31 Mar 2002 19:27:41 +0200 "GET
172.16.0.3 - - 31 Mar 2002 19:27:41 +0200 "GET
172.16.0.3 - - 31 Mar 2002 19:27:41 +0200 "GET
172.16.0.3 - - 31 Mar 2002 19:27:41 +0200 "GET
172.16.0.3 - - 31 Mar 2002 19:27:41 +0200 "GET
172.16.0.3 - - 31 Mar 2002 19:27:41 +0200 "GET
172.16.0.3 - - 31 Mar 2002 19:27:41 +0200 "GET
172.16.0.3 - - 31 Mar 2002 19:27:41 +0200 "GET
172.16.0.3 - - 31 Mar 2002 19:27:41 +0200 "GET
172.16.0.3 - - 31 Mar 2002 19:27:41 +0200 "GET
172.16.0.3 - - 31 Mar 2002 19:27:41 +0200 "GET
172.16.0.3 - - 31 Mar 2002 19:30:41 +0200 "GET
172.16.0.3 - - 31 Mar 2002 19:30:41 +0200 "GET
172.16.0.3 - - 31 Mar 2002 19:30:41 +0200 "GET
172.16.0.3 - - 31 Mar 2002 19:30:41 +0200 "GET
EDIT
You may notice that the [ and ] have disappeared from the output. If you want them back, change print $0 in the last awk line to print $1 "[" $2 "]" $3.
I used this command to find the last 5 minutes of logs for the particular event "DHCPACK"; try the below:
$ grep "DHCPACK" /var/log/messages | grep "$(date +%h\ %d) [$(date --date='5 min ago' +%H)-$(date +%H)]:*:*"
You can use this to get the current time and each log line's time:
#!/bin/bash
log="log_file_name"
while read line
do
    current_hours=`date | awk 'BEGIN{FS="[ :]+"}; {print $4}'`
    current_minutes=`date | awk 'BEGIN{FS="[ :]+"}; {print $5}'`
    current_seconds=`date | awk 'BEGIN{FS="[ :]+"}; {print $6}'`
    log_file_hours=`echo $line | awk 'BEGIN{FS="[ [/:]+"}; {print $7}'`
    log_file_minutes=`echo $line | awk 'BEGIN{FS="[ [/:]+"}; {print $8}'`
    log_file_seconds=`echo $line | awk 'BEGIN{FS="[ [/:]+"}; {print $9}'`
done < $log
Then compare the log_file_* and current_* variables.
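As a more self-contained variant of the same idea, here is a minimal sketch (GNU date assumed; the file name is hypothetical) that keeps the lines whose bracketed timestamp falls within the last X minutes of the current time; swap now for the last line's timestamp, as in the awk answer above, to measure from the last recorded entry instead:
#!/bin/bash
log="access.log"   # hypothetical file name
x=$((5 * 60))      # window in seconds (5 minutes)
now=$(date +%s)
while IFS= read -r line; do
    # extract "31/Mar/2002:19:30:41 +0200" from between the brackets
    ts=${line#*\[}; ts=${ts%%]*}
    # rewrite it into a form GNU date parses: "31 Mar 2002 19:30:41 +0200"
    ts=${ts//\// }; ts=${ts/:/ }
    secs=$(date -d "$ts" +%s)
    (( now - secs <= x )) && printf '%s\n' "$line"
done < "$log"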
