I need to sum up values from 100 files. This is part of my input
suma_wiazan_wodorowych_2_1.txt
2536
1928
1830
1774
1732
1673
1620
suma_wiazan_wodorowych_2_101.txt (name for every file is changing by 100, so 1, 101, 201 etc)
2535
1987
1895
1829
1805
1714
1657
So my script should add the first row from the first file, the first row from the second file, and so on up to the hundredth file:
2535+2536+..+..+2621
And again the second row from the first file + the second row from the second file, etc.
The length of every file is 5000 rows (so I will have 5000 sums)
Do you have any idea?
A one-liner using paste and bc:
paste -d + suma_wiazan_wodorowych_2_* | bc
assuming the lines contain only bare numbers without a leading + (negative numbers, that is, numbers with a single leading -, are fine), and the files have an equal number of lines.
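To see what bc receives, you can run the paste step on its own. A minimal sketch with just two of the files (the + delimiter joins corresponding lines into one arithmetic expression per row):
$ paste -d + suma_wiazan_wodorowych_2_1.txt suma_wiazan_wodorowych_2_101.txt
2536+2535
1928+1987
1830+1895
1774+1829
1732+1805
1673+1714
1620+1657
Piping that into bc evaluates each line, giving one sum per row.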
with awk
$ awk '{sum[FNR]+=$1} END{for(i=1;i<=FNR;i++) print sum[i]}' file*
Sum the corresponding values from all input files and print the totals at the end.
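If you want to be explicit about exactly which 100 files are read, a brace expansion can spell them out instead of the glob. A small sketch, assuming bash 4 or later and that the last file is suma_wiazan_wodorowych_2_9901.txt (1 + 99*100):
$ awk '{sum[FNR]+=$1} END{for(i=1;i<=FNR;i++) print sum[i]}' suma_wiazan_wodorowych_2_{1..9901..100}.txt
Since addition does not depend on file order, the glob version gives the same sums.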
I need to divide my text file. It contains numbers from 29026 to 58050. This is a small fragment of my input:
29026 29027 29028 29029 29030 29031 29032 29033 29034 29035 29036 29037 29038 29039 29040
29041 29042 29043 29044 29045 ...........................................................
................................................58029 58030 58031 58032 58033 58034 58035
58036 58037 58038 58039 58040 58041 58042 58043 58044 58045 58046 58047 58048 58049 58050
I must create 225 index groups. Every group must have 129 numbers. So my output will look like
[ Lipid 1 ]
29026 29027 29028 29029 ...................................
...............
...........................29150 29151 29152 29153 29154
[ Lipid 2 ]
...
...
[ Lipid 225 ]
57921 57922 57923 57924 57925 57926......
.....
.......................
58044 58045 58046 58047 58048 58049 58050
Do you have any idea?
Edit
My text file
29026 29027 29028 29029 29030 29031 29032 29033 29034 29035 29036 29037 29038 29039 29040
29041 29042 29043 29044 29045 29046 29047 29048 29049 29050 29051 29052 29053 29054 29055
29056 29057 29058 29059 29060 29061 29062 29063 29064 29065 29066 29067 29068 29069 29070
29071 29072 29073 29074 29075 29076 29077 29078 29079 29080 29081 29082 29083 29084 29085
29086 29087 29088 29089 29090 29091 29092 29093 29094 29095 29096 29097 29098 29099 29100
29101 29102 29103 29104 29105 29106 29107 29108 29109 29110 29111 29112 29113 29114 29115
29116 29117 29118 29119 29120 29121 29122 29123 29124 29125 29126 29127 29128 29129 29130
29131 29132 29133 29134 29135 29136 29137 29138 29139 29140 29141 29142 29143 29144 29145
29146 29147 29148 29149 29150 29151 29152 29153 29154 29155 29156 29157 29158 29159 29160
29161 29162 29163 29164 29165 29166 29167 29168 29169 29170 29171 29172 29173 29174 29175
29176 29177 29178 29179 29180 29181 29182 29183 29184 29185 29186 29187 29188 29189 29190
29191 29192 29193 29194 29195 29196 29197 29198 29199 29200 29201 29202 29203 29204 29205
29206 29207 29208 29209 29210 29211 29212 29213 29214 29215 29216 29217 29218 29219 29220
29221 29222 29223 29224 29225 29226 29227 29228 29229 29230 29231 29232 29233 29234 29235
29236 29237 29238 29239 29240 29241 29242 29243 29244 29245 29246 29247 29248 29249 29250
29251 29252 29253 29254 29255 29256 29257 29258 29259 29260 29261 29262 29263 29264 29265
29266 29267 29268 29269 29270 29271 29272 29273 29274 29275 29276 29277 29278 29279 29280
29281 29282 29283 29284 29285 29286 29287 29288 29289 29290 29291 29292 29293 29294 29295
29296 29297 29298 29299 29300 29301 29302 29303 29304 29305 29306 29307 29308 29309 29310
29311 29312 29313 29314 29315 29316 29317 29318 29319 29320 29321 29322 29323 29324 29325
29326 29327 29328 29329 29330 29331 29332 29333 29334 29335 29336 29337 29338 29339 29340
29341 29342 29343 29344 29345 29346 29347 29348 29349 29350 29351 29352 29353 29354 29355
29356 29357 29358 29359 29360 29361 29362 29363 29364 29365 29366 29367 29368 29369 29370
29371 29372 29373 29374 29375 29376 29377 29378 29379 29380 29381 29382 29383 29384 29385
29386 29387 29388 29389 29390 29391 29392 29393 29394 29395 29396 29397 29398 29399 29400
29401 29402 29403 29404 29405 29406 29407 29408 29409 29410 29411 29412 29413 29414 29415
29416 29417 29418 29419 29420 29421 29422 29423 29424 29425 29426 29427 29428 29429 29430
here I have thousands of lines, but I will not paste all of this text
57736 57737 57738 57739 57740 57741 57742 57743 57744 57745 57746 57747 57748 57749 57750
57751 57752 57753 57754 57755 57756 57757 57758 57759 57760 57761 57762 57763 57764 57765
57766 57767 57768 57769 57770 57771 57772 57773 57774 57775 57776 57777 57778 57779 57780
57781 57782 57783 57784 57785 57786 57787 57788 57789 57790 57791 57792 57793 57794 57795
57796 57797 57798 57799 57800 57801 57802 57803 57804 57805 57806 57807 57808 57809 57810
57811 57812 57813 57814 57815 57816 57817 57818 57819 57820 57821 57822 57823 57824 57825
57826 57827 57828 57829 57830 57831 57832 57833 57834 57835 57836 57837 57838 57839 57840
57841 57842 57843 57844 57845 57846 57847 57848 57849 57850 57851 57852 57853 57854 57855
57856 57857 57858 57859 57860 57861 57862 57863 57864 57865 57866 57867 57868 57869 57870
57871 57872 57873 57874 57875 57876 57877 57878 57879 57880 57881 57882 57883 57884 57885
57886 57887 57888 57889 57890 57891 57892 57893 57894 57895 57896 57897 57898 57899 57900
57901 57902 57903 57904 57905 57906 57907 57908 57909 57910 57911 57912 57913 57914 57915
57916 57917 57918 57919 57920 57921 57922 57923 57924 57925 57926 57927 57928 57929 57930
57931 57932 57933 57934 57935 57936 57937 57938 57939 57940 57941 57942 57943 57944 57945
57946 57947 57948 57949 57950 57951 57952 57953 57954 57955 57956 57957 57958 57959 57960
57961 57962 57963 57964 57965 57966 57967 57968 57969 57970 57971 57972 57973 57974 57975
57976 57977 57978 57979 57980 57981 57982 57983 57984 57985 57986 57987 57988 57989 57990
57991 57992 57993 57994 57995 57996 57997 57998 57999 58000 58001 58002 58003 58004 58005
58006 58007 58008 58009 58010 58011 58012 58013 58014 58015 58016 58017 58018 58019 58020
58021 58022 58023 58024 58025 58026 58027 58028 58029 58030 58031 58032 58033 58034 58035
58036 58037 58038 58039 58040 58041 58042 58043 58044 58045 58046 58047 58048 58049 58050
Here is how I understood your problem:
The input is a text file in several lines, with fifteen numbers on each line, separated by spaces or tabs. Some lines (perhaps the last one) may have fewer than fifteen numbers. (In fact in the solution below it doesn't matter how many numbers are on each line.)
You must group the numbers into sets of 129 numbers each, in sequence. The last group may have fewer than 129 numbers if the input cardinality is not an exact multiple of 129. In the solution below, it doesn't matter how many input numbers there are (and therefore how many groups there will be in the output).
For each group of 129 numbers, you must get a few lines in the output. First, a title or label that says [Lipid n] where n is the group number, and then the numbers in that group, shown fifteen per line (so there will be eight full lines and a ninth line with only 9 numbers on it: 129 = 15 * 8 + 9).
Here's how you can do this. First let's start with a small example, and then we can look at what must be changed for a more general solution.
I will assume that your inputs can be arbitrary numbers of any length; of course, if they are consecutive numbers like you showed in your sample data, then the problem is trivial and completely uninteresting. So let's assume your numbers are in fact any numbers at all. (Not really; I wrote the solution for non-negative integers; but it can be re-written for "tokens" of non-blank characters separated by blanks.)
I start with the following input file:
$ cat lipid-inputs
124 150 178 111 143 177 116
154 194 139 183 132 180 133
185 142 101 159 122 184 151
120 188 161 136 113 189 170
We want to group the 28 input numbers into sets of ten numbers each, and present the output with (at most) seven numbers per line. So: There will be two full groups, and a third group with only eight member numbers (since we have only 28 inputs). The desired output looks like this:
[Lipid 1]
124 150 178 111 143 177 116
154 194 139
[Lipid 2]
183 132 180 133 185 142 101
159 122 184
[Lipid 3]
151 120 188 161 136 113 189
170
Strategy: First write the input numbers one per line, so we can then arrange them ten per line (ten: cardinality of desired groups in the output). Then add line numbers (which will go into the label lines). Then edit the "line number" lines to add the "lipid" stuff, and break the data lines into shorter lines, showing seven tokens each (possibly fewer on the last line in each group).
Implementation: tr to break up the tokens one per line; paste reading repeatedly from standard input, ten stdin lines for each output line; then sed = to add the line numbers (on separate lines); and finally a standard sed for the final editing. The command looks like this:
$ tr -s ' ' '\n' < lipid-inputs | paste -d ' ' - - - - - - - - - - |
> sed = | sed -E 's/^[[:digit:]]+$/[Lipid &]/ ;
> s/(([[:blank:]]*[[:digit:]]+){7}) /\1\n/g'
The output is the one I showed already.
To generalize (so you can apply this to your problem): The number of tokens per line in the input file is irrelevant. To get 15 tokens per line in the output, change the hard-coded number 7 to 15 on the last line of the command shown above. And to allocate 129 tokens per group, instead of 10, what needs to change is the paste command: I show it reading ten times from stdin; you need 129. So it is better to create a string of 129 dashes separated by spaces with a simple command, rather than hard-coding it, and to use that string as the arguments to paste. I show how to do this for my example; you can adapt it for yours.
Define variables to hold your relevant values: how many tokens per lipid (129 in your case, 10 in mine) and how many tokens per line in the output (15 in your case, 7 in mine).
$ tokens_per_lipid=10
$ tokens_per_line=7
Then create a variable to hold the string - - - - [...] needed in the paste command. There are several ways to do this, here's just one:
$ paste_arg=$(yes '-' | head -n $tokens_per_lipid | tr '\n' ' ')
Let's check it:
$ echo $paste_arg
- - - - - - - - - -
OK, so let's re-write the command that does what you need. We must use double-quotes for the argument to sed to allow variable expansion.
$ tr -s ' ' '\n' < lipid-inputs | paste -d ' ' $paste_arg |
> sed = | sed -E "s/^[[:digit:]]+$/[Lipid &]/ ;
> s/(([[:blank:]]*[[:digit:]]+){$tokens_per_line}) /\1\n/g"
[Lipid 1]
124 150 178 111 143 177 116
154 194 139
[Lipid 2]
183 132 180 133 185 142 101
159 122 184
[Lipid 3]
151 120 188 161 136 113 189
170
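For the original problem, the same pipeline only needs the two variables set to the question's values. A sketch, assuming the index file is named index.txt (substitute your real file name) and that the numbers are space-separated as in the sample:
$ tokens_per_lipid=129
$ tokens_per_line=15
$ paste_arg=$(yes '-' | head -n $tokens_per_lipid | tr '\n' ' ')
$ tr -s ' ' '\n' < index.txt | paste -d ' ' $paste_arg |
> sed = | sed -E "s/^[[:digit:]]+$/[Lipid &]/ ;
> s/(([[:blank:]]*[[:digit:]]+){$tokens_per_line}) /\1\n/g"
With 29025 input numbers this yields 225 groups of 129, each printed fifteen per line (eight full lines plus a final line of nine).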
I have no clue what you really are trying to do, but maybe this does what you want
< input sed -zE 's/(([0-9]+[^0-9]+){129})/[ Lipid # ]\n\1\n/g' | awk 'BEGIN { RS = ORS = "]" } { sub("#", NR) } 1' | sed '$d'
It uses Sed to insert the [ Lipid # ] string (with some newlines) after every 129 occurrences of [0-9]+[^0-9]+ (which is 1 or more digits followed by 1 or more non-digits); then it uses Awk to substitute # with increasing numbers starting from one (to do so, it interprets the ] as the record separator, so it can change # to the number of the record, NR); finally it uses Sed again to remove the last line, which is the trailing record separator left over from the Awk processing.
I used Awk for inserting the increasing numbers as there's no easy way to do maths in Sed; I used Sed to break the file and insert text in between, as requested, since I find it easier than doing it in Awk.
If you need to have all numbers on one line in the output, you can do
< input sed -zE 's/[^0-9]+/ /g;s/(([0-9]+[^0-9]+){129})/[ Lipid # ]\n\1\n/g' | awk 'BEGIN { RS = ORS = "]" } { sub("#", NR) } 1' | sed '$d'
where I have just added s/[^0-9]+/ /g; to collapse whatever happens to be between numbers to a single whitespace.
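As a quick sanity check on either variant, you can count the group labels in the result; for the question's data (29026 to 58050, i.e. 29025 numbers) you would expect 225, assuming the input ends with a newline:
< input sed -zE 's/(([0-9]+[^0-9]+){129})/[ Lipid # ]\n\1\n/g' | awk 'BEGIN { RS = ORS = "]" } { sub("#", NR) } 1' | sed '$d' | grep -c 'Lipid'
This should print 225.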
I have two txt files: File1 is a tsv with 9 columns. Following is its first row (SRR6691737.359236/0_14228//11999_12313 is the first column and everything from Repeat onward is the 9th column):
SRR6691737.359236/0_14228//11999_12313 Censor repeat 5 264 1169 + . Repeat BOVA2 SINE 1 260 9
File2 is a tsv with 9 columns. Following is its first row (everything from Read onward is the 9th column):
CM011822.1 reefer discordance 63738705 63738727 . + . Read SRR6691737.359236 11999 12313; Dup 277
File1 contains the read name (SRR6691737.359236), read length (0_14228) and coordinates (11999_12313), while file2 contains only the read name and coordinates. All read names and coordinates in file1 are present in file2, but file2 may also contain the same read names with different coordinates, as well as read names that are not present in file1.
I want to write a script which finds read names and coordinates in file2 that match those in file1 and adds the read length from file1 to file2, i.e. changes the last column of file2 from:
Read SRR6691737.359236 11999 12313; Dup 277
to:
Read SRR6691737.359236/0_14228//11999_12313; Dup 277
any help?
It's unclear how your input files look.
You write:
I have two txt files: File1 is a tsv with 9 columns. Following is
its first row (SRR6691737.359236/0_14228//11999_12313 is the first
column and after Repeat is the 9th column):
SRR6691737.359236/0_14228//11999_12313 Censor repeat 5 264 1169 + . Repeat BOVA2 SINE 1 260 9
If I try to check the columns (and put them in a 'Column,Value' pair):
Column,Value
1,SRR6691737.359236/0_14228//11999_12313
2,Censor
3,repeat
4,5
5,264
6,1169
7,+
8,.
9,Repeat
10,BOVA2
11,SINE
12,1
13,260
14,9
That seems to be 14 columns, but you specify 9 columns...
Can you edit your question, and be clear about this?
i.e. specify as csv
SRR6691737.359236/0_14228//11999_12313,Censor,repeat,5,.....
Added info, after feedback:
file1 contains the following fields (tab-separated):
SRR6691737.359236/0_14228//11999_12313
Censor
repeat
5
264
1169
+
.
Repeat BOVA2 SINE 1 260 9
You want to convert this (using a script) to a tab-separated file:
CM011822.1
reefer
discordance
63738705
63738727
+
.
Read SRR6691737.359236 11999 12313
Dup 277
More info is needed to solve this!
field 1: How/Where is the info for 'CM011822.1' coming from?
fields 2 and 3: 'reefer'/'discordance'. Is this fixed text, should these fields always contain these values, or are there exceptions?
fields 4 and 5: Where are these values (63738705 ; 63738727) coming from?
OK, it's clear that there are more questions to be asked than can be answered here…
Second update:
Create a file, name it 'mani.awk':
FILENAME=="file1"{
split($1,a,"/");
x=a[1] " " a[4];
y=x; gsub(/_/," ",y);
r[y]=$1;
c=1; for (i in r) { print c++,i,"....",r[i]; }
}
FILENAME=="file2"{
print "<--", $0, "--> " ;
for (i in r) {
if ($9 ~ i) {
print "B:" r[i];
split(r[i],b,"/");
$9="Read " r[i];
print "OK";
}
};
print "<--", $0, "--> " ;
}
After this, gawk -F'\t' -f mani.awk file1 file2 should produce the (debug-annotated) result; the -F'\t' is needed so that awk treats each tab-separated column as one field and $9 is the whole Read ... column.
If not, then I suggest you learn AWK 😉 and change the script as needed.
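For reference, here is a more compact sketch of the same idea that also keeps whatever follows the matched coordinates (such as ; Dup 277) intact. It assumes both files are tab-separated; mani2.awk is just a hypothetical name for this variant:
# build key "name coord1 coord2" -> full file1 identifier
FILENAME=="file1"{
    split($1,a,"/"); key=a[1] " " a[4]; gsub(/_/," ",key)
    r[key]=$1
    next
}
# file2: replace the matching part of field 9, leave the rest of the line alone
{
    for (k in r) if (sub(k, r[k], $9)) break
    print
}
Run it as gawk -F'\t' -v OFS='\t' -f mani2.awk file1 file2; lines of file2 with no match are printed unchanged.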
I need to find this number: the '3' in the second column where there is a '3' in the first column.
(This is an example. I might also need to find the '25' in the second column where there is a '36' in the first column.)
The numbers in the first column are unique. There is no other row starting with a '3' in the first column.
This data is in a text file, and I'd like to use bash (awk, sed, grep, etc.)
I need to find a number on the second column, knowing the (unique) number of the first column.
In this case, I need to grep the 3 below the 0 in the second column (here, in the third row):
108 330
132 0
3 3
26 350
36 25
43 20
93 10
101 3
102 3
103 1
This is not good enough, because the grep should apply only to the first-column elements, so that the output would be a single number:
cat foo.txt | grep -w '3' | awk '{print $2}'
Although it is a text file, let's imagine it is a MySQL table. The needed query would be:
SELECT column2 WHERE column1='3'
In this case (text file), I know the input value of the first column (eg, 132, or 93, or 3), and I need to find the value of the same row, in the second column (eg, 0, 10, 3, respectively).
Assuming you mean "find rows where the first column contains a specific string exactly, and the second contains that string somewhere within":
awk -v val="3" '$1 == val && $2 ~ $1 { print $2 }' foo.txt
Notice also how this avoids a useless use of cat.
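If you only need the plain lookup described in the question (print column 2 whenever column 1 equals a given key), a minimal sketch would be:
awk -v key=3 '$1 == key { print $2 }' foo.txt
With the sample data this prints 3; with key=36 it prints 25.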
You can find repeated patterns by grouping the first match and looking for a repeat. This works for your example:
grep -wE '([^ ]+) +\1' infile
Output:
3 3
I have a column of several rows like this:
20.000
15.000
42.500
42.500
45.000
45.000
50.000
50.000
50.000
50.000
50.000
50.000
50.000
50.000
50.000
and I need to end up with a file where:
first element is 20/2
second element is the previous value + 15/2
third element is the previous value + 42.5/2
and so on until the end
My problem is how to do the "loop".
Perl to the rescue:
perl -lne 'print $s += $_ / 2' input-file > output-file
-l removes newlines from input and adds them to output
-n reads the input line by line, executing the code for each
$_ is the value read from each line
/ 2 is division by 2
+= is the operator that adds its right hand side to its left hand side and stores the result in the left hand side, returning the new value. I named the variable $s as in "sum".
simply,
$ awk '{print v+=$1/2}' file
10
17.5
38.75
60
82.5
105
130
155
180
205
230
255
280
305
330
you can set printf formatting if needed
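For instance, a small sketch of the same one-liner with a fixed three-decimal format (same assumed input file name):
$ awk '{printf "%.3f\n", v+=$1/2}' file
This prints 10.000, 17.500, 38.750, and so on.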
Try this:
awk '{prev += ($0) / 2; printf("%.3f\n", prev);}' a2.txt
Input:
20.000
15.000
42.500
42.500
45.000
45.000
50.000
50.000
50.000
50.000
50.000
50.000
50.000
50.000
50.000
Output:
10.000
17.500
38.750
60.000
82.500
105.000
130.000
155.000
180.000
205.000
230.000
255.000
280.000
305.000
330.000
I guess you need the output to be on one line:
awk '{s+=$1/2; out = out s " ";} END{print out}' file
#=> 10 17.5 38.75 60 82.5 105 130 155 180 205 230 255 280 305 330
There's an extra space at the end, which I think does no harm.
You can remove it if you don't want it.
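One way to avoid the trailing space altogether (a small variation on the same idea) is to prepend the separator to every element except the first:
awk '{s+=$1/2; out = out (NR>1 ? " " : "") s} END{print out}' file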
I think you might be looking for a for loop:
awk '{for (i = 1; i <= NF; i++) print temp = temp + $i/2 }' filename
Remember one thing: i refers to a column number. If you want to run this operation on only one column, you can change the loop bounds to
i = column number; i <= column number;
You can use this loop for more complex scenarios.
If you want to change the field separator, you can use the -F option followed by the separator:
awk -F ":" '{}' filename
I created a script that auto-logs in to a router and checks the current CPU load; if the load exceeds a certain threshold, I need it to print the current CPU value to standard output.
I would like to search the script output for a certain pattern (the value 80 in this case, which is the threshold for high CPU load) and then, for each instance of the pattern, check whether the current value is greater than 80; if so, print the 5 lines before the pattern followed by the line containing the pattern.
Question 1: how do I loop over each instance of the pattern and apply some code to each of them separately?
Question 2: how do I print n lines before the pattern followed by x lines after the pattern?
For example, I used awk to search for the pattern "health" and print 6 lines after it, as below:
awk '/health/{x=NR+6}(NR<=x){print}' ./logs/CpuCheck.log
I would like to do the same for the pattern "80", this time printing 5 lines before it and one line after, but only if $3 (representing the current CPU load) exceeds the value 80.
Below is the output of the auto-login script (file name: CpuCheck.log):
ABCD-> show health xxxxxxxxxx
* - current value exceeds threshold
1 Min 1 Hr 1 Hr
Cpu Limit Curr Avg Avg Max
-----------------+-------+------+------+-----+----
01 80 39 36 36 47
WXYZ-> show health xxxxxxxxxx
* - current value exceeds threshold
1 Min 1 Hr 1 Hr
Cpu Limit Curr Avg Avg Max
-----------------+-------+------+------+-----+----
01 80 29 31 31 43
Thanks in advance for the help
Rather than use awk, you could use the -B and -A switches to grep, which print a number of lines before and after a matched pattern:
grep -E -B 5 -A 1 '^[0-9]+[[:space:]]+80[[:space:]]+(100|9[0-9]|8[1-9])' CpuCheck.log
The pattern matches lines which start with some numbers, followed by spaces, followed by 80, followed by a number between 81 and 100. The -E switch enables extended regular expressions (EREs), which are needed if you want to use the + character to mean "one or more". If your version of grep doesn't support EREs, you can instead use the slightly more verbose \{1,\} syntax:
grep -B 5 -A 1 '^[0-9]\{1,\}[[:space:]]\{1,\}80[[:space:]]\{1,\}\(100\|9[0-9]\|8[1-9]\)' CpuCheck.log
If grep isn't an option, one alternative would be to use awk. The easiest way would be to store all of the lines in a buffer:
awk 'f-->0;{a[NR]=$0}/^[0-9]+[[:space:]]+80[[:space:]]+(100|9[0-9]|8[1-9])/{for(i=NR-5;i<=NR;++i)print a[i];f=1}' CpuCheck.log
This stores every line in an array a. When the third column is greater than 80, it prints the previous 5 lines from the array. It also sets the flag f to 1, so that f-->0 is true for the next line, causing it to be printed.
Originally I had opted for a comparison $3>80 instead of the regular expression but this isn't a good idea due to the varying format of the lines.
If the log file is really big, meaning that reading the whole thing into memory is unfeasible, you could implement a circular buffer so that only the previous 5 lines were stored, or alternatively, read the file twice.
Unfortunately, awk is stream-oriented and doesn't have a simple way to get the lines before the current line. But that doesn't mean it isn't possible:
awk '
    BEGIN {
        bufferSize = 6;
    }
    {
        buffer[NR % bufferSize] = $0;
    }
    $2 == 80 && $3 > 80 {
        # print the five lines before the match and the line with the match
        for (i = 1; i <= bufferSize; i++) {
            print buffer[(NR + i) % bufferSize];
        }
    }
' ./logs/CpuCheck.log
I think the easiest way is with awk, reading the file twice.
This should use essentially 0 memory except whatever is used to store the line numbers.
If there is only one occurrence:
awk 'NR==FNR&&$2=="80"{to=NR+1;from=NR-5}NR!=FNR&&FNR<=to&&FNR>=from' file{,}
If there is more than one occurrence:
awk 'NR==FNR&&$2=="80"{to[++x]=NR+1;from[x]=NR-5}
NR!=FNR{for(i in to)if(FNR<=to[i]&&FNR>=from[i]){print;next}}' file{,}
Input/output
Input
1
2
3
4
5
6
7
8
9
10
11
12
01 80 39 36 36 47
13
14
15
16
17
01 80 39 36 36 47
18
19
20
Output
8
9
10
11
12
01 80 39 36 36 47
13
14
15
16
17
01 80 39 36 36 47
18
How it works
NR==FNR&&$2=="80"{to[++x]=NR+5;from[x]=NR-5}
In the first file if the second field is 80 set to and from to the record number + or - whatever you want.
Increment the occurrence variable x.
NR!=FNR
In the second file
for(i in to)
For each occurrence
if(FNR<=to[i]&&FNR>=from[i]){print;next}
If the current record number (in this file) is between this occurrence's from and to, print the line. The next prevents the line from being printed multiple times if occurrences of the pattern are close together.
file{,}
Use the file twice, as two arguments; the {,} brace expansion expands to file file.
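You can see the expansion on its own with echo (a quick check in bash or any shell that performs brace expansion):
$ echo CpuCheck.log{,}
CpuCheck.log CpuCheck.log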