Using a bash script I'd like to perform an analysis of a big log.txt file consisting of a large number of lines, each in the following format:
PHE 233,R PHE 233,0.0,0.0,0.0,-0.07884,0.0296770011962,0.00209848087911,0.023555,0.0757544518494,0.00535664866078,-0.065675,0.0859064571205,0.00607450383776,0.0,0.0,0.0,-0.12096,0.0486756448339,0.00344188785407
TYR 234,R TYR 234,0.0,0.0,0.0,-1.25531,0.629561517169,0.0445167217964,-0.004085,0.179779219531,0.0127123105246,0.169925,0.199097411774,0.0140783129982,-0.06675426,0.0227214659046,0.00160665026196,-1.15622426,0.59309226863,0.0419379565017
GLY 235,R GLY 235,0.0,0.0,0.0,-0.039345,0.0259211491836,0.00183290203639,-0.053115,0.0245550763591,0.00173630610061,0.098535,0.0441429357316,0.00312137691973,0.0,0.0,0.0,0.006075,0.0208364914273,0.00147336243844
THR 236,R THR 236,0.0,0.0,0.0,-0.03241,0.0100624003101,0.000711519149426,-0.115375,0.0590932684407,0.00417852508369,0.116505,0.0563931731241,0.00398759951286,0.0,0.0,0.0,-0.03128,0.0262172004608,0.00185383602295
From each line of log.txt I need to extract only the first, second, and last fields and write them to a new file, final_log.txt. For the lines above, that would be:
PHE 233 0.00344188785407
TYR 234 0.0419379565017
THR 236 0.00185383602295
Most importantly: because typical logs consist of a large number of lines, I'd like to sort the lines in the new txt file according to the value of the last field, given a chosen threshold. That is, from log.txt I'd like to select and write to final_log.txt only those lines where the number in the last column is equal to or higher than the defined threshold. I'd be very thankful for any solution to this (for me!) non-trivial problem.
Gleb
Using awk:
awk -F'[ ,]' '{print $1" "$2" "$NF}' file
OR
$ awk -F'[ ,]' '{print $1,$2,$NF}' file
PHE 233 0.00344188785407
TYR 234 0.0419379565017
GLY 235 0.00147336243844
THR 236 0.00185383602295
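The question also asks for threshold filtering and sorting on the last column. A minimal sketch extending the same command (the threshold 0.003 and the file names are placeholders; -g is GNU sort's general-numeric sort, which also understands exponent notation):

$ awk -F'[ ,]' -v thr=0.003 '$NF >= thr {print $1, $2, $NF}' file |
> sort -k3,3g > final_log.txt

Add -r to sort in descending order instead.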
I need to divide my text file. It contains numbers from 29026 to 58050. This is a small fragment of my input file:
29026 29027 29028 29029 29030 29031 29032 29033 29034 29035 29036 29037 29038 29039 29040
29041 29042 29043 29044 29045 ...........................................................
................................................58029 58030 58031 58032 58033 58034 58035
58036 58037 58038 58039 58040 58041 58042 58043 58044 58045 58046 58047 58048 58049 58050
I must create 225 index groups. Every group must have 129 numbers. So my output will look like:
[ Lipid 1 ]
29026 29027 29028 29029 ...................................
...............
...........................29150 29151 29152 29153 29154
[ Lipid 2 ]
...
...
[ Lipid 225 ]
57921 57922 57923 57924 57925 57926......
.....
.......................
58044 58045 58046 58047 58048 58049 58050
Do you have any idea?
Edit
My text file
29026 29027 29028 29029 29030 29031 29032 29033 29034 29035 29036 29037 29038 29039 29040
29041 29042 29043 29044 29045 29046 29047 29048 29049 29050 29051 29052 29053 29054 29055
29056 29057 29058 29059 29060 29061 29062 29063 29064 29065 29066 29067 29068 29069 29070
29071 29072 29073 29074 29075 29076 29077 29078 29079 29080 29081 29082 29083 29084 29085
29086 29087 29088 29089 29090 29091 29092 29093 29094 29095 29096 29097 29098 29099 29100
29101 29102 29103 29104 29105 29106 29107 29108 29109 29110 29111 29112 29113 29114 29115
29116 29117 29118 29119 29120 29121 29122 29123 29124 29125 29126 29127 29128 29129 29130
29131 29132 29133 29134 29135 29136 29137 29138 29139 29140 29141 29142 29143 29144 29145
29146 29147 29148 29149 29150 29151 29152 29153 29154 29155 29156 29157 29158 29159 29160
29161 29162 29163 29164 29165 29166 29167 29168 29169 29170 29171 29172 29173 29174 29175
29176 29177 29178 29179 29180 29181 29182 29183 29184 29185 29186 29187 29188 29189 29190
29191 29192 29193 29194 29195 29196 29197 29198 29199 29200 29201 29202 29203 29204 29205
29206 29207 29208 29209 29210 29211 29212 29213 29214 29215 29216 29217 29218 29219 29220
29221 29222 29223 29224 29225 29226 29227 29228 29229 29230 29231 29232 29233 29234 29235
29236 29237 29238 29239 29240 29241 29242 29243 29244 29245 29246 29247 29248 29249 29250
29251 29252 29253 29254 29255 29256 29257 29258 29259 29260 29261 29262 29263 29264 29265
29266 29267 29268 29269 29270 29271 29272 29273 29274 29275 29276 29277 29278 29279 29280
29281 29282 29283 29284 29285 29286 29287 29288 29289 29290 29291 29292 29293 29294 29295
29296 29297 29298 29299 29300 29301 29302 29303 29304 29305 29306 29307 29308 29309 29310
29311 29312 29313 29314 29315 29316 29317 29318 29319 29320 29321 29322 29323 29324 29325
29326 29327 29328 29329 29330 29331 29332 29333 29334 29335 29336 29337 29338 29339 29340
29341 29342 29343 29344 29345 29346 29347 29348 29349 29350 29351 29352 29353 29354 29355
29356 29357 29358 29359 29360 29361 29362 29363 29364 29365 29366 29367 29368 29369 29370
29371 29372 29373 29374 29375 29376 29377 29378 29379 29380 29381 29382 29383 29384 29385
29386 29387 29388 29389 29390 29391 29392 29393 29394 29395 29396 29397 29398 29399 29400
29401 29402 29403 29404 29405 29406 29407 29408 29409 29410 29411 29412 29413 29414 29415
29416 29417 29418 29419 29420 29421 29422 29423 29424 29425 29426 29427 29428 29429 29430
here I have thousands of lines, but I will not paste all of this text
57736 57737 57738 57739 57740 57741 57742 57743 57744 57745 57746 57747 57748 57749 57750
57751 57752 57753 57754 57755 57756 57757 57758 57759 57760 57761 57762 57763 57764 57765
57766 57767 57768 57769 57770 57771 57772 57773 57774 57775 57776 57777 57778 57779 57780
57781 57782 57783 57784 57785 57786 57787 57788 57789 57790 57791 57792 57793 57794 57795
57796 57797 57798 57799 57800 57801 57802 57803 57804 57805 57806 57807 57808 57809 57810
57811 57812 57813 57814 57815 57816 57817 57818 57819 57820 57821 57822 57823 57824 57825
57826 57827 57828 57829 57830 57831 57832 57833 57834 57835 57836 57837 57838 57839 57840
57841 57842 57843 57844 57845 57846 57847 57848 57849 57850 57851 57852 57853 57854 57855
57856 57857 57858 57859 57860 57861 57862 57863 57864 57865 57866 57867 57868 57869 57870
57871 57872 57873 57874 57875 57876 57877 57878 57879 57880 57881 57882 57883 57884 57885
57886 57887 57888 57889 57890 57891 57892 57893 57894 57895 57896 57897 57898 57899 57900
57901 57902 57903 57904 57905 57906 57907 57908 57909 57910 57911 57912 57913 57914 57915
57916 57917 57918 57919 57920 57921 57922 57923 57924 57925 57926 57927 57928 57929 57930
57931 57932 57933 57934 57935 57936 57937 57938 57939 57940 57941 57942 57943 57944 57945
57946 57947 57948 57949 57950 57951 57952 57953 57954 57955 57956 57957 57958 57959 57960
57961 57962 57963 57964 57965 57966 57967 57968 57969 57970 57971 57972 57973 57974 57975
57976 57977 57978 57979 57980 57981 57982 57983 57984 57985 57986 57987 57988 57989 57990
57991 57992 57993 57994 57995 57996 57997 57998 57999 58000 58001 58002 58003 58004 58005
58006 58007 58008 58009 58010 58011 58012 58013 58014 58015 58016 58017 58018 58019 58020
58021 58022 58023 58024 58025 58026 58027 58028 58029 58030 58031 58032 58033 58034 58035
58036 58037 58038 58039 58040 58041 58042 58043 58044 58045 58046 58047 58048 58049 58050
Here is how I understood your problem:
The input is a text file in several lines, with fifteen numbers on each line, separated by spaces or tabs. Some lines (perhaps the last one) may have fewer than fifteen numbers. (In fact in the solution below it doesn't matter how many numbers are on each line.)
You must group the numbers into sets of 129 numbers each, in sequence. The last group may have fewer than 129 numbers if the input cardinality is not an exact multiple of 129. In the solution below, it doesn't matter how many input numbers there are (and therefore how many groups there will be in the output).
For each group of 129 numbers, you must produce a few lines in the output: first, a title or label that says [Lipid n], where n is the group number, and then the numbers in that group, shown fifteen per line (so there will be eight full lines and a ninth line with only 9 numbers on it: 129 = 15 * 8 + 9).
Here's how you can do this. First let's start with a small example, and then we can look at what must be changed for a more general solution.
I will assume that your inputs can be arbitrary numbers of any length; of course, if they are consecutive numbers like in your sample data, then the problem is trivial and completely uninteresting. So let's assume your numbers are in fact any numbers at all. (Not quite: I wrote the solution for non-negative integers, but it can be rewritten for "tokens" of non-blank characters separated by blanks.)
I start with the following input file:
$ cat lipid-inputs
124 150 178 111 143 177 116
154 194 139 183 132 180 133
185 142 101 159 122 184 151
120 188 161 136 113 189 170
We want to group the 28 input numbers into sets of ten numbers each, and present the output with (at most) seven numbers per line. So there will be two full groups, and a third group with only eight member numbers (since we have only 28 inputs). The desired output looks like this:
[Lipid 1]
124 150 178 111 143 177 116
154 194 139
[Lipid 2]
183 132 180 133 185 142 101
159 122 184
[Lipid 3]
151 120 188 161 136 113 189
170
Strategy: First write the input numbers one per line, so we can then arrange them ten per line (ten being the cardinality of the desired groups in the output). Then add line numbers, which will become the label lines. Then edit the line-number lines to add the "Lipid" label, and break the data lines into shorter lines of seven tokens each (possibly fewer on the last line of each group).
Implementation: tr breaks the input into tokens, one per line; paste reads repeatedly from standard input, consuming ten stdin lines per output line; then sed = adds the line numbers (on separate lines); and finally a standard sed does the editing. The command looks like this:
$ tr -s ' ' '\n' < lipid-inputs | paste -d ' ' - - - - - - - - - - |
> sed = | sed -E 's/^[[:digit:]]+$/[Lipid &]/ ;
> s/(([[:blank:]]*[[:digit:]]+){7}) /\1\n/g'
The output is the one I showed already.
To generalize (so you can apply it to your problem): The number of tokens per line in the input file is irrelevant. To get 15 tokens per line in the output, change the hard-coded number 7 to 15 on the last line of the command shown above. And to allocate 129 tokens per group, instead of ten, what needs to be changed is the paste command: I show it reading ten times from stdin; you need 129. So it would be better to create a string of 129 dashes separated by spaces in a simple command, rather than hard-coding it, and to use that string as the argument to paste. I show how to do this for my example; you can adapt it for yours.
Define variables to hold your relevant values: how many tokens per lipid (129 in your case, 10 in mine) and how many tokens per line in the output (15 in your case, 7 in mine).
$ tokens_per_lipid=10
$ tokens_per_line=7
Then create a variable to hold the string - - - - [...] needed in the paste command. There are several ways to do this, here's just one:
$ paste_arg=$(yes '-' | head -n $tokens_per_lipid | tr '\n' ' ')
Let's check it:
$ echo $paste_arg
- - - - - - - - - -
OK, so let's re-write the command that does what you need. We must use double-quotes for the argument to sed to allow variable expansion.
$ tr -s ' ' '\n' < lipid-inputs | paste -d ' ' $paste_arg |
> sed = | sed -E "s/^[[:digit:]]+$/[Lipid &]/ ;
> s/(([[:blank:]]*[[:digit:]]+){$tokens_per_line}) /\1\n/g"
[Lipid 1]
124 150 178 111 143 177 116
154 194 139
[Lipid 2]
183 132 180 133 185 142 101
159 122 184
[Lipid 3]
151 120 188 161 136 113 189
170
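Adapted to the numbers in the question (a sketch, assuming the input file is named input; change the sed replacement to [ Lipid & ] if you want the exact bracket spacing shown in the question):

$ tokens_per_lipid=129
$ tokens_per_line=15
$ paste_arg=$(yes '-' | head -n $tokens_per_lipid | tr '\n' ' ')
$ tr -s ' ' '\n' < input | paste -d ' ' $paste_arg |
> sed = | sed -E "s/^[[:digit:]]+$/[Lipid &]/ ;
> s/(([[:blank:]]*[[:digit:]]+){$tokens_per_line}) /\1\n/g"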
I have no clue what you are really trying to do, but maybe this does what you want:
< input sed -zE 's/(([0-9]+[^0-9]+){129})/[ Lipid # ]\n\1\n/g' | awk 'BEGIN { RS = ORS = "]" } { sub("#", NR) } 1' | sed '$d'
It uses sed to insert the [ Lipid # ] string (with some newlines) before every 129 occurrences of [0-9]+[^0-9]+ (one or more digits followed by one or more non-digits); then it uses awk to substitute # with numbers counting from one (to do so, it treats ] as the record separator, so it can change # to the record number NR); finally it uses sed again to remove the last line, which is the trailing record separator left over from the awk processing.
I used awk for inserting the increasing numbers since there's no easy way to do math in sed; I used sed to break the file and insert text in between, as requested, because I find that easier than doing it in awk.
If you need each group's numbers to be all on one line in the output, you can do
< input sed -zE 's/[^0-9]+/ /g;s/(([0-9]+[^0-9]+){129})/[ Lipid # ]\n\1\n/g' | awk 'BEGIN { RS = ORS = "]" } { sub("#", NR) } 1' | sed '$d'
where I have just added s/[^0-9]+/ /g; to collapse whatever happens to be between numbers into a single space.
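For comparison, here is a sketch of the same grouping done entirely in awk; the variable names group and perline are mine, with values matching the question:

awk -v group=129 -v perline=15 '
{
    for (i = 1; i <= NF; i++) {
        if (count % group == 0)               # first token of a new group: emit its header
            printf "[ Lipid %d ]\n", ++g
        pos = ++count % group                 # position within the group (0 means last token)
        printf "%s%s", $i, (pos == 0 || pos % perline == 0) ? "\n" : " "
    }
}
END { if (count % group && count % group % perline) print "" }   # finish an incomplete last line
' input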
Can someone help me find code for copying all the lines between a line 'X 0' (X = H, He...) and the nearest '****'? I use bash for programming.
H 0
S 3 1.00 0.000000000000
0.1873113696D+02 0.3349460434D 01
0.2825394365D+01 0.2347269535D+00
0.6401216923D+00 0.8137573261D+00
S 1 1.00 0.000000000000
0.1612777588D+00 0.1000000000D+01
****
He 0
S 3 1.00 0.000000000000
0.3842163400D+02 0.4013973935D 01
0.5778030000D+01 0.2612460970D+00
0.1241774000D+01 0.7931846246D+00
S 1 1.00 0.000000000000
0.2979640000D+00 0.1000000000D+01
****
I want to do this for all the "X 0" (X = H, He...) specifically, obtaining an isolated text like the following for each "X 0":
H 0
S 3 1.00 0.000000000000
0.1873113696D+02 0.3349460434D 01
0.2825394365D+01 0.2347269535D+00
0.6401216923D+00 0.8137573261D+00
S 1 1.00 0.000000000000
0.1612777588D+00 0.1000000000D+01
****
and
He 0
S 3 1.00 0.000000000000
0.3842163400D+02 0.4013973935D 01
0.5778030000D+01 0.2612460970D+00
0.1241774000D+01 0.7931846246D+00
S 1 1.00 0.000000000000
0.2979640000D+00 0.1000000000D+01
****
So I think I have to find a way to do it using the string containing "X 0".
I was trying to use grep -A2000 'H 0' filename.txt | grep -B2000 -m8 '****' filename.txt >> filenameH.txt, but it's not so useful for the other examples of X, just for the first.
Using awk:
awk '/^[^ ]+ 0$/{p=1;++c}/^\*\*\*\*$/{print >>FILENAME c;p=0}p{print >> FILENAME c}' file
The script creates as many files as there are blocks matching the patterns /^[^ ]+ 0$/ and /^\*\*\*\*$/. The file index starts at 1: for the sample input above (in a file named file), it creates file1 containing the H block and file2 containing the He block.
If the records are separated by four stars. This needs gawk (for the regex RS and the RT variable):
$ awk -v RS='\\*\\*\\*\\*\n' '$1~/^He?$/{printf "%s", $0 RT > FILENAME $1}' file
This will only extract the H and He records; the condition before the curly brace is equivalent to $1=="H" || $1=="He". If you don't want to restrict the output, just remove that condition.
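If you only need one element's block at a time, a sed range address is a simple alternative (a sketch, assuming the element name and the 0 are separated by a single space as shown; adjust the first address for other elements):

$ sed -n '/^H 0$/,/^\*\*\*\*$/p' file > filenameH.txt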
I am trying to change numbers in a long list to other numbers. For example,
cat inputfile.txt
A 254 B 456 C 546
D 548 E 548 F 458
A 244 B 416 C 566
D 148 E 558 F 428
And I want to change B's value by adding a percentage to it. For example, I want to increase B in the first row by 3% and B in the next one by 2%, as follows:
cat inputfile.txt
A 254 B 469.68 C 546
D 548 E 548 F 458
A 244 B 424.32 C 566
D 148 E 558 F 428
I tried the following but it didn't work.
a=(456 416)
b= (469.68 424.32)
for i in ${a[@]}; do
for j in ${b[@]}; do
sed -i -- "s/${i}/${j}" inputfile.txt
done
done
There are multiple problems.
cat inputfile.txt
A 254 B 456 C 546
D 548 E 548 F 458
A 244 B 416 C 566
D 148 E 558 F 428
a=(456 416)
b= (469.68 424.32)
for i in ${a[@]}; do
for j in ${b[@]}; do
sed -i -- "s/${i}/${j}" inputfile.txt
done
done
a)
Your assignment to b fails: no space is allowed around an assignment.
b) The substitution pattern isn't closed:
sed -i -- "s/${i}/${j}/" inputfile.txt
c) Now it should run (with non-empty values), but it will try to replace 456 with 469.68 and then with 424.32. That can't work, since the value has already been changed to 469.68. It will then also try to change every 416 with both values.
a=(456 416)
b=(469.68 424.32)
for i in ${a[@]}; do
for j in ${b[@]}; do
sed "s/${i}/${j}/" inputfile.txt
done
done
You have two corresponding values which need to stay in sync, because you want to replace the first with the second. So you have to iterate once, by index:
max=${#a[@]}
for i in $(seq 0 $((max - 1))); do
sed "s/${a[$i]}/${b[$i]}/" inputfile.txt
done
I removed the -i from sed for testing.
The last problem is that there might be number collisions, for example replacing the 456 in 1456 with 469.68, or the 416 in 416.02 with 424.32.
To prevent this from happening, we can match a blank before the number, and a boundary (matching a blank or the line end) after it:
sed "s/ ${a[$i]}\b/ ${b[$i]}/" inputfile.txt
The \b notation has the advantage over [ $] that it is non-consuming: we don't need to capture the boundary character in order to push it back into the replacement.
a=(456 416)
b=(469.68 424.32)
max=${#a[@]}
for i in $(seq 0 $((max - 1))); do
sed -i "s/ ${a[$i]}\b/ ${b[$i]}/" inputfile.txt
done
I don't know the source of your a and b values, so maybe the following reasoning doesn't apply, but storing the originals and replacements in two parallel arrays seems suboptimal. They have to be of the same length, but that isn't guaranteed. You can test for it, but if one value gets lost, it's hard to find out where it was, in order to remove the corresponding b value.
Rather than combining all originals in one array and all replacements in another, binding them as pairs seems the better idea:
a=(456 469.68)
b=(416 424.32)
But that's not far from the final sed-expression, which would be:
a="s/ 456\b/ 469.68/"
b="s/ 416\b/ 424.32/"
Now that's a bit more verbose, but we skip the loop completely:
sed -i "${a};${b}" inputfile.txt
and the input file only has to be read once; the whole thing can now be tested without -i.
If you happen to have mass data, you can just generate a file like this:
s/ 456\b/ 469.68/
s/ 416\b/ 424.32/
name it numcorrect.sed, and call it with:
sed -f numcorrect.sed inputfile.txt
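If the replacement values are computed from the originals (here +3% and +2%), the sed script itself can be generated. A sketch under that assumption; orig and pct are illustrative names:

orig=(456 416)
pct=(3 2)
for i in "${!orig[@]}"; do
    # compute the increased value and emit one substitution line per pair
    printf 's/ %s\\b/ %s/\n' "${orig[$i]}" \
        "$(awk -v n="${orig[$i]}" -v p="${pct[$i]}" 'BEGIN { printf "%.2f", n * (1 + p / 100) }')"
done > numcorrect.sed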
There are a lot of questions about data processing on CSV files, but they are all specific.
I have a comma-separated CSV file. I have already done the required operations, but there is one step I am still stuck on.
Please note I am looking to make this change using a shell script. awk or sed might help me, but I lack knowledge of the correct syntax for this.
Input:
Index,SrNo,Name,Desc,Target,Strength
1,125,RX,Big,NULL,236
2,246,DMT,Med,NULL,548
3,425,VT,SML,NULL,461
4,512,RX,Big,NULL,415
5,951,VT,SML,NULL,243
6,426,DMT,Med,NULL,412
I want to change the value of the column 'Target' from NULL to 'Active' if the column 'Name' is either 'RX' or 'DMT'.
Below is the expected output.
Index,SrNo,Name,Desc,Target,Strength
1,125,RX,Big,Active,236
2,246,DMT,Med,Active,548
3,425,VT,SML,NULL,461
4,512,RX,Big,Active,415
5,951,VT,SML,NULL,243
6,426,DMT,Med,Active,412
Assuming your input is comma delimited as the question says, you can use this awk:
awk 'BEGIN{FS=OFS=","} $3 ~ /^(RX|DMT)$/{$5 = "Active"} 1' file.csv
Index,SrNo,Name,Desc,Target,Strength
1,125,RX,Big,Active,236
2,246,DMT,Med,Active,548
3,425,VT,SML,NULL,461
4,512,RX,Big,Active,415
5,951,VT,SML,NULL,243
6,426,DMT,Med,Active,412
To get formatted output use column:
awk 'BEGIN{FS=OFS=","} $3 ~ /^(RX|DMT)$/{$5 = "Active"} 1' file.csv |
column -s, -t
Index  SrNo  Name  Desc  Target  Strength
1      125   RX    Big   Active  236
2      246   DMT   Med   Active  548
3      425   VT    SML   NULL    461
4      512   RX    Big   Active  415
5      951   VT    SML   NULL    243
6      426   DMT   Med   Active  412
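Since the question mentions sed as well, the same change can be sketched as a single substitution (assuming GNU sed's -E; the captured group holds the first four fields plus their trailing comma, and NULL is anchored as the whole fifth field):

$ sed -E 's/^([^,]+,[^,]+,(RX|DMT),[^,]+,)NULL,/\1Active,/' file.csv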
I have a folder which has files with the following contents.
ATOM 9 CE1 PHE A 1 70.635 -26.989 98.805 1.00 39.17 C
ATOM 10 CE2 PHE A 1 69.915 -26.416 100.989 1.00 42.21 C
ATOM 11 CZ PHE A 1 -69.816 26.271 -99.622 1.00 40.62 C
ATOM 12 N PRO A 2 -69.795 30.848 101.863 1.00 44.44 N
In some files, the 7th column appears as follows instead.
ATOM 9 CE1 PHE A 1 70.635-26.989 98.805 1.00 39.17 C
ATOM 10 CE2 PHE A 1 69.915-26.416 100.989 1.00 42.21 C
ATOM 11 CZ PHE A 1 -69.816-26.271 -99.622 1.00 40.62 C
ATOM 12 N PRO A 2 -69.795-30.848 101.863 1.00 44.44 N
I would like to extract the names of the files which have the above type of lines. What is an easy way to do this?
Referring to Erik E. Lorenz's answer, you can simply do
grep -l '\s-\?[0-9.]\+-[0-9.]\+\s' dir/*
From the grep manpage:
-l
(The letter ell.) Write only the names of files containing selected
lines to standard output. Pathnames are written once per file searched.
If the standard input is searched, a pathname of (standard input) will
be written, in the POSIX locale. In other locales, standard input may be
replaced by something more appropriate in those locales.
A combination of grep and cut works for me:
grep -H -m 1 '\s-\?[0-9.]\+-[0-9.]\+\s' dir/* | cut -d: -f1
This performs the following steps:
for every file in dir/*, find the first match (-m 1) of two adjacent numbers separated by only a dash
print it with the filename prepended (-H; this should be the default anyway when more than one file is searched)
extract the file name using cut
This is fast since it only looks for the first matching line in each file. If there are other places with two adjacent numbers, consider changing the regex.
Edit:
This doesn't match scientific notation and may falsely report contents such as '.-.', for example in comments. If you're dealing with either of these, you have to extend the regex.
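A sketch of one possible extension: it allows an optional e/E exponent on either number and requires a digit on each side of the joining dash, so contents like '.-.' no longer match:

grep -l '\s-\?[0-9.]*[0-9]\([eE][+-]\?[0-9]\+\)\?-[0-9][0-9.]*\([eE][+-]\?[0-9]\+\)\?\s' dir/*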
awk 'NF > 10 && $1 ~ /^[[:upper:]]+$/ && $2 ~ /^[[:digit:]]+/ { print FILENAME; nextfile }' *
This prints the files that contain a line with more than 10 fields whose first field is all uppercase letters and whose second field starts with digits.
Using GNU awk for nextfile:
awk '$7 ~ /[0-9]-[0-9]/{print FILENAME; nextfile}' *
or more efficiently, since you only need to test the first line of each file if all lines in a given file have the same format:
awk 'FNR==1{if ($7 ~ /[0-9]-[0-9]/) print FILENAME; nextfile}' *