Cut text into two strings using a string delimiter - bash

I want to cut a string into two strings using a string delimiter. My file looks like this:
CH 7 ][ Elapsed: 0 s ][ 2021-11-27 12:55
BSSID PWR Beacons #Data, #/s CH MB ENC CIPHER AUTH ESSID
EE:EE:EE:EE:EE:EE -82 3 0 0 6 130 WPA2 CCMP PSK Tenda
FF:FF:FF:FF:FF:FF -90 4 0 0 1 130 WPA2 CCMP PSK Wifi
BSSID STATION PWR Rate Lost Frames Notes Probes
EE:EE:EE:EE:EE:EE AA:AA:AA:AA:AA:AA -63 0 - 1e 0 3
EE:EE:EE:EE:EE:EE BB:BB:BB:BB:BB:BB -74 0 - 1 0 1
I want to cut my text at this delimiter: BSSID STATION PWR Rate Lost Frames Notes Probes. I tried awk -F 'BSSID' '{print $1}' file, but it cuts at every occurrence of BSSID; I want to cut only at the last occurrence.
Desired output:
CH 7 ][ Elapsed: 0 s ][ 2021-11-27 12:55
BSSID PWR Beacons #Data, #/s CH MB ENC CIPHER AUTH ESSID
EE:EE:EE:EE:EE:EE -82 3 0 0 6 130 WPA2 CCMP PSK Tenda
FF:FF:FF:FF:FF:FF -90 4 0 0 1 130 WPA2 CCMP PSK Wifi

awk '/BSSID STATION PWR Rate Lost Frames Notes Probes/{exit} 1' file
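This prints every line (that's the trailing 1) until it reaches the line matching the full header, then exits, so everything from that header onward is dropped. If you prefer sed, the following should behave the same way; it prints lines up to, but not including, the first line matching the header:

sed -n '/BSSID STATION PWR Rate Lost Frames Notes Probes/q;p' file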

Writing A'B'CD+ABC' using two inverters and 5 2:1 multiplexers

The question says: draw F(A,B,C,D)=∑(3,7,11,12,13). I derived A'B'CD+ABC'. I am trying to draw it using two inverters and five 2:1 multiplexers, but I couldn't connect the output to the separate components I wrote. I know the correct answer, but I just can't understand it.
Here's the correct solution
Why is the last mux connected to 0 instead of 1, like we did for all the other components? And why did they feed 1 into the mux's 1 input in the answer?
OK, then maybe this will help:
F(A,B,C,D)=∑(3,7,11,12,13).
with 2 NOTs; 5 2:1 muxes
truth table
ABCD R
0000 0
0001 0
0010 0
0011 1
0100 0
0101 0
0110 0
0111 1
1000 0
1001 0
1010 0
1011 1
1100 1
1101 1
1110 0
1111 0
kmap
AB \ CD   00  01  11  10
   00      0   0   1   0
   01      0   0   1   0
   11      1   1   0   0
   10      0   0   1   0
expression
ABC'+A'CD+B'CD
simplifying
ABC'+(A'+B')CD
ABC'+(A'+B')''CD     (double complement changes nothing)
ABC'+(AB)'CD         (De Morgan on the inner complement: (A'+B')' = AB)
(AB)'CD + ABC'
aux truth table:
(AB)'CD ABC' ((AB)'CD + ABC')
0 0 0 see note 2
0 1 1 see note 1
1 0 1 see note 2
1 1 1 see note 1
note 1: If ABC' is true (mux select is 1), then the output is true (the mux's 1 input is tied to 1).
note 2: If ABC' is false (mux select is 0), then the output is (AB)'CD (the mux's 0 input is fed (AB)'CD); the "see note 2" outputs are true only when (AB)'CD is true.
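Not part of the original answer, but if you want to sanity-check the simplified expression against the spec, a small awk sketch can brute-force all 16 input rows and compare them with the minterm list:

awk 'BEGIN {
    split("3 7 11 12 13", m); for (i in m) want[m[i]] = 1   # the required minterms
    for (n = 0; n < 16; n++) {
        A = int(n/8) % 2; B = int(n/4) % 2; C = int(n/2) % 2; D = n % 2
        f = ((1 - A*B) && C && D) || (A && B && !C)          # NOT(AB)*C*D  OR  A*B*NOT(C)
        if (f != (n in want)) { print "mismatch at minterm", n; bad = 1 }
    }
    if (!bad) print "expression matches the truth table"
}'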

How to merge files depending on a string in a specific column

I have two files that I need to merge together based on what string they contain in a specific column.
File 1 looks like this:
1 1655 1552 189
1 1433 1552 185
1 1623 1553 175
1 691 1554 182
1 1770 1554 184
1 1923 1554 182
1 1336 1554 181
1 660 1592 179
1 743 1597 179
File 2 looks like this:
1 1552 0 0 2 -9 G A A A
1 1553 0 0 2 -9 A A G A
1 1554 0 751 2 -9 A A A A
1 1592 0 577 1 -9 G A A A
1 1597 0 749 2 -9 A A G A
1 1598 0 420 1 -9 A A A A
1 1600 0 0 1 -9 A A G G
1 1604 0 1583 1 -9 A A A A
1 1605 0 1080 2 -9 G A A A
I want to match column 3 of file 1 to column 2 of file 2, with my output looking like:
1 1655 1552 189 0 0 2 -9 G A A A
1 1433 1552 185 0 0 2 -9 G A A A
1 1623 1553 175 0 0 2 -9 A A G A
1 691 1554 182 0 751 2 -9 A A A A
1 1770 1554 184 0 751 2 -9 A A A A
1 1923 1554 182 0 751 2 -9 A A A A
1 1336 1554 181 0 751 2 -9 A A A A
1 660 1592 179 0 577 1 -9 G A A A
1 743 1597 179 0 749 2 -9 A A G A
I am not interested in keeping any lines in file 2 that are not in file 1. Thanks in advance!
Thanks to @Abelisto I managed to figure something out 4 hours later!
sort -k 3,3 File1.txt > Pheno1.txt                        # sort file1 on its join key (column 3)
awk '($2 > 0)' File2.ped > Ped1.ped                       # drop rows with negative values in column 2
sort -k 2,2 Ped1.ped > Ped2.ped                           # sort file2 on its join key (column 2)
join -1 3 -2 2 Pheno1.txt Ped2.ped > Ped3.txt             # join on file1 column 3 = file2 column 2
cut -d ' ' -f 1,4,5 --complement Ped3.txt > Output.ped    # drop columns 1, 4 and 5 of the joined output
My real File2 actually contained negative values in the 2nd column (thankfully my real File1 didn't have any negatives), hence the use of awk to remove those rows. The two sorts are there because join requires both inputs to be sorted on the join field.
Using awk:
awk 'NR == FNR { arr[$2]=$3" "$4" "$5" "$6" "$7" "$8" "$9" "$10 } NR != FNR { print $1" "$2" "$3" "$4" "arr[$3] }' file2 file1
Process file2 first (NR == FNR): build an array arr indexed by the 2nd space-delimited field, with the 3rd to 10th fields, joined by spaces, as the value. Then, when processing file1 (NR != FNR), print its 1st to 4th space-delimited fields followed by the contents of arr for the key in field 3.
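For readability, here is the same program spread over a few lines, with comments; it is functionally identical, assuming the default space separators:

awk '
    NR == FNR {                 # first pass: file2, keyed by its 2nd field
        arr[$2] = $3" "$4" "$5" "$6" "$7" "$8" "$9" "$10
        next
    }
    {                           # second pass: file1
        print $1" "$2" "$3" "$4" "arr[$3]
    }
' file2 file1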
Since $1 seems to be the constant 1 and I have no idea about the row counts of either file (800,000 columns in file2 sounded like a lot), I'm hashing file1 instead:
$ awk '
NR==FNR {                              # first file (file1): collect fields 2-4, keyed by field 3
    a[$3] = a[$3] (a[$3]=="" ? "" : ORS) $2 OFS $3 OFS $4
    next
}
($2 in a) {                            # second file (file2): only keys that appeared in file1
    n = split(a[$2], t, ORS)           # there may be several file1 records per key
    for (i = 1; i <= n; i++) {
        $2 = t[i]                      # replace field 2 with the stored file1 fields 2-4
        print
    }
}' file1 file2
Output:
1 1655 1552 189 0 0 2 -9 G A A A
1 1433 1552 185 0 0 2 -9 G A A A
1 1623 1553 175 0 0 2 -9 A A G A
1 691 1554 182 0 751 2 -9 A A A A
1 1770 1554 184 0 751 2 -9 A A A A
1 1923 1554 182 0 751 2 -9 A A A A
1 1336 1554 181 0 751 2 -9 A A A A
1 660 1592 179 0 577 1 -9 G A A A
1 743 1597 179 0 749 2 -9 A A G A
When posting a question, please add details such as row and column counts to it. Better requirements yield better answers.

SNMP hrStorageIndex may change sometimes. How to identify a disk in SNMP?

hrStorageIndex and ifIndex may sometimes change after a reboot.
How can I identify a specific disk and a specific network interface in SNMP, under both Linux and Windows?
There are columns for hrStorageDescr and hrStorageType in the HOST-RESOURCES-MIB::hrStorageTable table.
Here is an example ...
snmptable -M +. -m +ALL -v 2c -Ci -c public -Pu myhost HOST-RESOURCES-MIB::hrStorageTable
SNMP table: HOST-RESOURCES-MIB::hrStorageTable
index hrStorageIndex hrStorageType hrStorageDescr hrStorageAllocationUnits hrStorageSize hrStorageUsed hrStorageAllocationFailures
1 1 HOST-RESOURCES-TYPES::hrStorageRam Physical memory 1024 Bytes 8057980 7268792 ?
3 3 HOST-RESOURCES-TYPES::hrStorageVirtualMemory Virtual memory 1024 Bytes 18347124 7687064 ?
6 6 HOST-RESOURCES-TYPES::hrStorageOther Memory buffers 1024 Bytes 8057980 124288 ?
7 7 HOST-RESOURCES-TYPES::hrStorageOther Cached memory 1024 Bytes 2366160 2366160 ?
10 10 HOST-RESOURCES-TYPES::hrStorageVirtualMemory Swap space 1024 Bytes 10289144 418272 ?
31 31 HOST-RESOURCES-TYPES::hrStorageFixedDisk / 4096 Bytes 12901535 11461911 ?
35 35 HOST-RESOURCES-TYPES::hrStorageFixedDisk /dev/shm 4096 Bytes 1007247 0 ?
36 36 HOST-RESOURCES-TYPES::hrStorageFixedDisk /boot 1024 Bytes 495844 100151 ?
37 37 HOST-RESOURCES-TYPES::hrStorageFixedDisk /home 4096 Bytes 44531330 5981531 ?
The same principle applies to IF-MIB::ifTable, which has an ifDescr column ...
snmptable -M +. -m +ALL -v 2c -Ci -c public -Pu myhost IF-MIB::ifTable
SNMP table: IF-MIB::ifTable
index ifIndex ifDescr ifType ifMtu ifSpeed ifPhysAddress ifAdminStatus ifOperStatus ifLastChange ifInOctets ifInUcastPkts ifInNUcastPkts ifInDiscards ifInErrors ifInUnknownProtos ifOutOctets ifOutUcastPkts ifOutNUcastPkts ifOutDiscards ifOutErrors ifOutQLen ifSpecific
1 1 lo softwareLoopback 16436 10000000 up up 0:0:00:00.00 723382401 729363414 0 0 0 0 723382401 729363414 0 0 0 0 SNMPv2-SMI::zeroDotZero
2 2 eth0 ethernetCsmacd 1500 1000000000 0:21:5e:4d:15:b7 up up 0:0:00:00.00 1030103587 37542077 3449194 0 0 0 1570760541 32130390 0 0 0 0 SNMPv2-SMI::zeroDotZero
3 3 eth1 ethernetCsmacd 1500 0 0:21:5e:4d:15:b8 down down 0:0:00:00.00 0 0 0 0 0 0 0 0 0 0 0 0 SNMPv2-SMI::zeroDotZero
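So, rather than hard-coding an index, you can resolve it from the description each time you poll. A rough sketch (the hostname, community string and mount point are placeholders, adjust to taste); the same idea works for ifDescr:

# print the current hrStorageIndex of the /home filesystem
snmpwalk -v 2c -c public myhost HOST-RESOURCES-MIB::hrStorageDescr |
    awk -F'[. ]' '$NF == "/home" { print $2 }'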

How are floppy disk sectors numbered

I was wondering how floppy disk sectors are ordered. I am currently writing a program to access the root directory of a floppy disk (FAT12-formatted, high density). I can load it with debug at sector 13h, but in assembly it is at head 1, track 0, sector 2. Why is sector 13h not at head 0, track 1, sector 1?
That's because the sectors on the other side of the disk come before the sectors on the second track of the first side.
Sectors 0 through 17 (11h) are found at head 0 track 0. Sectors 18 (12h) through 35 (23h) are found at head 1 track 0.
Logical sectors are numbered from zero up, but the sectors in a track are numbered from 1 to 18 (12h).
sector# head track sector usage
------- ---- ----- ------ --------
0 0h 0 0 1 1h boot
1 1h 0 0 2 2h FAT 1
2 2h 0 0 3 3h |
3 3h 0 0 4 4h v
4 4h 0 0 5 5h
5 5h 0 0 6 6h
6 6h 0 0 7 7h
7 7h 0 0 8 8h
8 8h 0 0 9 9h
9 9h 0 0 10 ah
10 ah 0 0 11 bh FAT 2
11 bh 0 0 12 ch |
12 ch 0 0 13 dh v
13 dh 0 0 14 eh
14 eh 0 0 15 fh
15 fh 0 0 16 10h
16 10h 0 0 17 11h
17 11h 0 0 18 12h
18 12h 1 0 1 1h
19 13h 1 0 2 2h root
20 14h 1 0 3 3h |
21 15h 1 0 4 4h v
...
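In other words, for a 1.44 MB disk with 2 heads and 18 sectors per track, logical sector = (track * heads + head) * 18 + (sector - 1). A quick shell check of the root directory example (the numbers are assumptions for a standard high-density floppy):

heads=2; spt=18                    # 1.44 MB floppy geometry
track=0; head=1; sector=2          # where debug finds the root directory
echo $(( (track * heads + head) * spt + (sector - 1) ))    # prints 19, i.e. 13h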

Go routine performance maximizing

I am writing a data mover in Go: taking data located in one data center and moving it to another data center. I figured Go would be perfect for this, given goroutines.
I notice that if I have one program running 1800 goroutines, the amount of data being transmitted is really low.
Here's the dstat printout, averaged over 30 seconds:
---load-avg--- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
1m 5m 15m |usr sys idl wai hiq siq| read writ| recv send| in out | int csw
0.70 3.58 4.42| 10 1 89 0 0 0| 0 156k|7306k 6667k| 0 0 | 11k 6287
0.61 3.28 4.29| 12 2 85 0 0 1| 0 6963B|8822k 8523k| 0 0 | 14k 7531
0.65 3.03 4.18| 12 2 86 0 0 1| 0 1775B|8660k 8514k| 0 0 | 13k 7464
0.67 2.81 4.07| 12 2 86 0 0 1| 0 1638B|8908k 8735k| 0 0 | 13k 7435
0.67 2.60 3.96| 12 2 86 0 0 1| 0 819B|8752k 8385k| 0 0 | 13k 7445
0.47 2.37 3.84| 11 2 86 0 0 1| 0 2185B|8740k 8491k| 0 0 | 13k 7548
0.61 2.22 3.74| 10 2 88 0 0 0| 0 1229B|7122k 6765k| 0 0 | 11k 6228
0.52 2.04 3.63| 3 1 97 0 0 0| 0 546B|1999k 1365k| 0 0 |3117 2033
If I run 9 instances of the program with 200 goroutines each, I see much better performance:
---load-avg--- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
1m 5m 15m |usr sys idl wai hiq siq| read writ| recv send| in out | int csw
8.34 9.56 8.78| 53 8 36 0 0 3| 0 410B| 38M 32M| 0 0 | 41k 26k
8.01 9.37 8.74| 74 10 12 0 0 4| 0 137B| 51M 51M| 0 0 | 59k 39k
8.36 9.31 8.74| 75 9 12 0 0 4| 0 1092B| 51M 51M| 0 0 | 59k 39k
6.93 8.89 8.62| 74 10 12 0 0 4| 0 5188B| 50M 49M| 0 0 | 59k 38k
7.09 8.73 8.58| 75 9 12 0 0 4| 0 410B| 51M 50M| 0 0 | 60k 39k
7.40 8.62 8.54| 75 9 12 0 0 4| 0 137B| 52M 49M| 0 0 | 61k 40k
7.96 8.63 8.55| 75 9 12 0 0 4| 0 956B| 51M 51M| 0 0 | 59k 39k
7.46 8.44 8.49| 75 9 12 0 0 4| 0 273B| 51M 50M| 0 0 | 58k 38k
8.08 8.51 8.51| 75 9 12 0 0 4| 0 410B| 51M 51M| 0 0 | 59k 39k
The load average is a little high, but I'll worry about that later. The network traffic, though, is almost hitting the network's potential.
I'm on Ubuntu 12.04,
8 GB of RAM,
2.3 GHz processors (says EC2 :P).
Also, I've increased my file descriptors from 1024 to 10240.
I thought Go was designed for this kind of thing, or am I expecting too much of Go for this application?
Is there something trivial that I'm missing? Do I need to configure my system to maximize Go's potential?
EDIT
I guess my question wasn't clear enough. Sorry. I'm not asking for magic from Go; I know computers have limits on what they can handle.
So I'll rephrase: why is 1 instance with 1800 goroutines != 9 instances with 200 goroutines each? It's the same number of goroutines, yet significantly less performance for 1 instance compared to 9 instances.
Please note that goroutines are also limited to your local machine and that channels are not natively network-enabled, i.e. your particular case is probably not playing to Go's strengths.
Also: what did you expect from throwing (supposedly) every transfer into a goroutine? I/O operations tend to have their bottleneck where the bits hit the metal, i.e. the physical transfer of the data to the medium. Think of it like this: no matter how many threads (or goroutines, in this case) try to write to the network card, you still only have one network card. Most likely, hitting it with too many concurrent write calls will only slow things down, since the overhead involved increases.
If you think this is not the problem, or you want to audit your code for optimized performance, Go has neat built-in features for doing so: see "Profiling Go Programs" (official Go blog).
Still, the actual bottleneck might well be outside your Go program and/or in the way it interacts with the OS.
Addressing your actual problem without code is pointless guessing. Post some, and everyone will try their best to help you.
You will probably have to post your source code to get any real input, but just to be sure: have you increased the number of CPUs to use?
package main

import "runtime"

func main() {
    runtime.GOMAXPROCS(runtime.NumCPU()) // use all available cores; the default was 1 in Go releases of that era
}
