bash - print id number on multiple lines in new txt file

I am trying to concatenate multiple .txt files into one .txt file. Each individual file has one column with 178 rows and is tied to one particular person. I would like the concatenated file to have three columns: person's ID #; person's session #; value taken from the individual .txt file, e.g.:
Desired output format:
1 1 000
1 1 001
....
1 1 177
1 2 000
1 2 001
...
1 2 177
My current script prints the ID # and session # only on the first line of each person's 178 lines; the remaining 177 values end up in column 1, underneath the ID #, e.g.:
1 1 000
001
002
...
177
1 2 000
001
002
...
177
2 1 000
001
...
I would like help getting the ID # and session # next to each of the 178 rows taken from each person's individual .txt file, not just in the first row as it currently prints.
Code below:
for subject in 170; do
    for session in 1 2; do
        cd ${datadir}
        ts_SalienceNetwork=$(cat sub${subject}.txt | awk '{print $1}')
        echo -e "${subject}\t${session}\t${ts_SalienceNetwork}" >> concat_data.txt
    done
done

$ts_SalienceNetwork contains the entire file, and you're just putting the other variables before it once, not before each line.
Use awk to print the first column of each row preceded by the variables on every line; the intermediate variable isn't needed at all.
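A quick way to see the problem (a sketch; seq 3 stands in for the contents of one subject file):
$ vals=$(seq 3)
$ echo -e "1\t2\t${vals}"
1	2	1
2
3
The fix is to let awk itself prepend the two variables to every line: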
cd "$datadir" # no need to do this each time through the loop
for subject in 170; do
for session in 1 2; do
awk -v subject="$subject" -v session="$session" '{printf("%s\t%s\t%s\n", subject, session, $1)}' "sub$subject.txt"
done
done >> concat_data.txt

Related

How to divide numbers stored as text into many parts in awk or maybe sed or other?

I need to divide my text file, which contains the numbers from 29026 to 58050. This is a small fragment of my input file:
29026 29027 29028 29029 29030 29031 29032 29033 29034 29035 29036 29037 29038 29039 29040
29041 29042 29043 29044 29045 ...........................................................
................................................58029 58030 58031 58032 58033 58034 58035
58036 58037 58038 58039 58040 58041 58042 58043 58044 58045 58046 58047 58048 58049 58050
I must create 225 index groups. Every group must have 129 numbers. So my output will look like
[ Lipid 1 ]
29026 29027 29028 29029 ...................................
...............
...........................29150 29151 29152 29153 29154
[ Lipid 2 ]
...
...
[ Lipid 225 ]
57921 57922 57923 57924 57925 57926......
.....
.......................
58044 58045 58046 58047 58048 58049 58050
Do you have any idea?
Edit
My text file
29026 29027 29028 29029 29030 29031 29032 29033 29034 29035 29036 29037 29038 29039 29040
29041 29042 29043 29044 29045 29046 29047 29048 29049 29050 29051 29052 29053 29054 29055
29056 29057 29058 29059 29060 29061 29062 29063 29064 29065 29066 29067 29068 29069 29070
29071 29072 29073 29074 29075 29076 29077 29078 29079 29080 29081 29082 29083 29084 29085
29086 29087 29088 29089 29090 29091 29092 29093 29094 29095 29096 29097 29098 29099 29100
29101 29102 29103 29104 29105 29106 29107 29108 29109 29110 29111 29112 29113 29114 29115
29116 29117 29118 29119 29120 29121 29122 29123 29124 29125 29126 29127 29128 29129 29130
29131 29132 29133 29134 29135 29136 29137 29138 29139 29140 29141 29142 29143 29144 29145
29146 29147 29148 29149 29150 29151 29152 29153 29154 29155 29156 29157 29158 29159 29160
29161 29162 29163 29164 29165 29166 29167 29168 29169 29170 29171 29172 29173 29174 29175
29176 29177 29178 29179 29180 29181 29182 29183 29184 29185 29186 29187 29188 29189 29190
29191 29192 29193 29194 29195 29196 29197 29198 29199 29200 29201 29202 29203 29204 29205
29206 29207 29208 29209 29210 29211 29212 29213 29214 29215 29216 29217 29218 29219 29220
29221 29222 29223 29224 29225 29226 29227 29228 29229 29230 29231 29232 29233 29234 29235
29236 29237 29238 29239 29240 29241 29242 29243 29244 29245 29246 29247 29248 29249 29250
29251 29252 29253 29254 29255 29256 29257 29258 29259 29260 29261 29262 29263 29264 29265
29266 29267 29268 29269 29270 29271 29272 29273 29274 29275 29276 29277 29278 29279 29280
29281 29282 29283 29284 29285 29286 29287 29288 29289 29290 29291 29292 29293 29294 29295
29296 29297 29298 29299 29300 29301 29302 29303 29304 29305 29306 29307 29308 29309 29310
29311 29312 29313 29314 29315 29316 29317 29318 29319 29320 29321 29322 29323 29324 29325
29326 29327 29328 29329 29330 29331 29332 29333 29334 29335 29336 29337 29338 29339 29340
29341 29342 29343 29344 29345 29346 29347 29348 29349 29350 29351 29352 29353 29354 29355
29356 29357 29358 29359 29360 29361 29362 29363 29364 29365 29366 29367 29368 29369 29370
29371 29372 29373 29374 29375 29376 29377 29378 29379 29380 29381 29382 29383 29384 29385
29386 29387 29388 29389 29390 29391 29392 29393 29394 29395 29396 29397 29398 29399 29400
29401 29402 29403 29404 29405 29406 29407 29408 29409 29410 29411 29412 29413 29414 29415
29416 29417 29418 29419 29420 29421 29422 29423 29424 29425 29426 29427 29428 29429 29430
here I have thousands of lines, but I will not paste all of this text
57736 57737 57738 57739 57740 57741 57742 57743 57744 57745 57746 57747 57748 57749 57750
57751 57752 57753 57754 57755 57756 57757 57758 57759 57760 57761 57762 57763 57764 57765
57766 57767 57768 57769 57770 57771 57772 57773 57774 57775 57776 57777 57778 57779 57780
57781 57782 57783 57784 57785 57786 57787 57788 57789 57790 57791 57792 57793 57794 57795
57796 57797 57798 57799 57800 57801 57802 57803 57804 57805 57806 57807 57808 57809 57810
57811 57812 57813 57814 57815 57816 57817 57818 57819 57820 57821 57822 57823 57824 57825
57826 57827 57828 57829 57830 57831 57832 57833 57834 57835 57836 57837 57838 57839 57840
57841 57842 57843 57844 57845 57846 57847 57848 57849 57850 57851 57852 57853 57854 57855
57856 57857 57858 57859 57860 57861 57862 57863 57864 57865 57866 57867 57868 57869 57870
57871 57872 57873 57874 57875 57876 57877 57878 57879 57880 57881 57882 57883 57884 57885
57886 57887 57888 57889 57890 57891 57892 57893 57894 57895 57896 57897 57898 57899 57900
57901 57902 57903 57904 57905 57906 57907 57908 57909 57910 57911 57912 57913 57914 57915
57916 57917 57918 57919 57920 57921 57922 57923 57924 57925 57926 57927 57928 57929 57930
57931 57932 57933 57934 57935 57936 57937 57938 57939 57940 57941 57942 57943 57944 57945
57946 57947 57948 57949 57950 57951 57952 57953 57954 57955 57956 57957 57958 57959 57960
57961 57962 57963 57964 57965 57966 57967 57968 57969 57970 57971 57972 57973 57974 57975
57976 57977 57978 57979 57980 57981 57982 57983 57984 57985 57986 57987 57988 57989 57990
57991 57992 57993 57994 57995 57996 57997 57998 57999 58000 58001 58002 58003 58004 58005
58006 58007 58008 58009 58010 58011 58012 58013 58014 58015 58016 58017 58018 58019 58020
58021 58022 58023 58024 58025 58026 58027 58028 58029 58030 58031 58032 58033 58034 58035
58036 58037 58038 58039 58040 58041 58042 58043 58044 58045 58046 58047 58048 58049 58050
Here is how I understood your problem:
The input is a text file of several lines, with fifteen numbers on each line, separated by spaces or tabs. Some lines (perhaps the last one) may have fewer than fifteen numbers. (In fact, in the solution below it doesn't matter how many numbers are on each line.)
You must group the numbers into sets of 129, in sequence. The last group may have fewer than 129 numbers if the input cardinality is not an exact multiple of 129. In the solution below, it doesn't matter how many input numbers there are (and therefore how many groups there will be in the output).
For each group of 129 numbers, you must produce a few lines of output. First, a title or label that says [Lipid n], where n is the group number, and then the numbers in that group, shown fifteen per line (so there will be eight full lines and a ninth line with only 9 numbers on it: 129 = 15 * 8 + 9).
Here's how you can do this. First let's start with a small example, and then we can look at what must be changed for a more general solution.
I will assume that your inputs can be arbitrary numbers of any length; of course, if they are consecutive numbers like you showed in your sample data, then the problem is trivial and completely uninteresting. So let's assume your numbers are in fact any numbers at all. (Not really; I wrote the solution for non-negative integers; but it can be re-written for "tokens" of non-blank characters separated by blanks.)
I start with the following input file:
$ cat lipid-inputs
124 150 178 111 143 177 116
154 194 139 183 132 180 133
185 142 101 159 122 184 151
120 188 161 136 113 189 170
We want to group the 28 input numbers into sets of ten numbers each, and present the output with (at most) seven numbers per line. So: There will be two full groups, and a third group with only eight member numbers (since we have only 28 inputs). The desired output looks like this:
[Lipid 1]
124 150 178 111 143 177 116
154 194 139
[Lipid 2]
183 132 180 133 185 142 101
159 122 184
[Lipid 3]
151 120 188 161 136 113 189
170
Strategy: first write the input numbers one per line, so we can then arrange them ten per line (ten being the size of the desired groups in the output). Then add line numbers (which will go into the label lines). Then edit the "line number" lines to add the "Lipid" text, and break the data lines into shorter lines of seven tokens each (possibly fewer on the last line of each group).
Implementation: tr to break up the tokens one per line; paste reading repeatedly from standard input, ten stdin lines for each output line; then sed = to add the line numbers (on separate lines); and finally a standard sed for the final editing. The command looks like this:
$ tr -s ' ' '\n' < lipid-inputs | paste -d ' ' - - - - - - - - - - |
> sed = | sed -E 's/^[[:digit:]]+$/[Lipid &]/ ;
> s/(([[:blank:]]*[[:digit:]]+){7}) /\1\n/g'
The output is the one I showed already.
To generalize (so you can apply this to your problem): the number of tokens per line in the input file is irrelevant. To get 15 tokens per line in the output, change the hard-coded 7 to 15 on the last line of the command shown above. To allocate 129 tokens per group instead of 10, change the paste command: I show it reading ten times from stdin, and you need 129 reads. Rather than hard-coding 129 dashes, it is better to build the string of dashes separated by spaces with a simple command and pass it to paste. I show how to do this for my example; you can adapt it for yours.
Define variables to hold your relevant values: how many tokens per lipid (129 in your case, 10 in mine) and how many tokens per line in the output (15 in your case, 7 in mine).
$ tokens_per_lipid=10
$ tokens_per_line=7
Then create a variable to hold the string - - - - [...] needed in the paste command. There are several ways to do this, here's just one:
$ paste_arg=$(yes '-' | head -n $tokens_per_lipid | tr '\n' ' ')
Let's check it:
$ echo $paste_arg
- - - - - - - - - -
OK, so let's re-write the command that does what you need. We must use double-quotes for the argument to sed to allow variable expansion.
$ tr -s ' ' '\n' < lipid-inputs | paste -d ' ' $paste_arg |
> sed = | sed -E "s/^[[:digit:]]+$/[Lipid &]/ ;
> s/(([[:blank:]]*[[:digit:]]+){$tokens_per_line}) /\1\n/g"
[Lipid 1]
124 150 178 111 143 177 116
154 194 139
[Lipid 2]
183 132 180 133 185 142 101
159 122 184
[Lipid 3]
151 120 188 161 136 113 189
170
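Putting it all together for the numbers in the question (129 tokens per lipid, 15 per output line; a sketch assuming the input file is named input, as in the other answer below):
$ tokens_per_lipid=129
$ tokens_per_line=15
$ paste_arg=$(yes '-' | head -n $tokens_per_lipid | tr '\n' ' ')
$ tr -s ' ' '\n' < input | paste -d ' ' $paste_arg |
>   sed = | sed -E "s/^[[:digit:]]+$/[Lipid &]/ ;
>   s/(([[:blank:]]*[[:digit:]]+){$tokens_per_line}) /\1\n/g"
This should produce [Lipid 1] through [Lipid 225], each followed by eight lines of fifteen numbers and a ninth line of nine.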
I have no clue what you really are trying to do, but maybe this does what you want
< input sed -zE 's/(([0-9]+[^0-9]+){129})/[ Lipid # ]\n\1\n/g' | awk 'BEGIN { RS = ORS = "]" } { sub("#", NR) } 1' | sed '$d'
It uses sed to insert the [ Lipid # ] string (plus newlines) around every 129 occurrences of [0-9]+[^0-9]+ (one or more digits followed by one or more non-digits); then it uses awk to substitute # with increasing numbers starting from one (to do so, it treats ] as the record separator, so it can change # to the record number NR); finally it uses sed again to remove the last line, which is the trailing record separator left over from the awk processing.
I used awk for inserting the increasing numbers as there's no easy way to do math in sed; I used sed to break the file and insert text in between, as I find that easier than doing it in awk.
If you need each group's numbers on a single line in the output, you can do
< input sed -zE 's/[^0-9]+/ /g;s/(([0-9]+[^0-9]+){129})/[ Lipid # ]\n\1\n/g' | awk 'BEGIN { RS = ORS = "]" } { sub("#", NR) } 1' | sed '$d'
where I have just added s/[^0-9]+/ /g; to collapse whatever happens to be between the numbers (newlines included) into a single space.
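For comparison, here is a pure-awk sketch of the same grouping (no claim it is better; group size and line width are passed in as variables, and the input file is again called input):
awk -v per_group=129 -v per_line=15 '{
    for (i = 1; i <= NF; i++) {
        if (t % per_group == 0) printf "[ Lipid %d ]\n", ++g   # start a new group
        t++
        sep = (t % per_line == 0 || t % per_group == 0) ? "\n" : " "
        printf "%s%s", $i, sep
    }
}
END { if (t % per_line && t % per_group) print "" }' input
It counts tokens across input lines in t, emits a label every per_group tokens, and ends a line every per_line tokens or at the end of a group.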

find and replace substrings in a file which match strings in another file

I have two txt files: File1 is a tsv with 9 columns. Following is its first row (SRR6691737.359236/0_14228//11999_12313 is the first column and after Repeat is the 9th column):
SRR6691737.359236/0_14228//11999_12313 Censor repeat 5 264 1169 + . Repeat BOVA2 SINE 1 260 9
File2 is a tsv with 9 columns. Following is its first row (after Read is the 9th column):
CM011822.1 reefer discordance 63738705 63738727 . + . Read SRR6691737.359236 11999 12313; Dup 277
File1 contains information of read name (SRR6691737.359236), read length (0_14228) and coordinates (11999_12313) while file two contains only read name and coordinate. All read names and coordinates in file1 are present in file2, but file2 may also contain the same read names with different coordinates. Also file2 contains read names which are not present in file1.
I want to write a script which finds read names and coordinates in file2 that match those in file1 and adds the read length from file1 to file2. i.e. changes the last column of file2:
Read SRR6691737.359236 11999 12313; Dup 277
to:
Read SRR6691737.359236/0_14228//11999_12313; Dup 277
any help?
It's unclear how your input files look.
You write:
I have two txt files: File1 is a tsv with 9 columns. Following is
its first row (SRR6691737.359236/0_14228//11999_12313 is the first
column and after Repeat is the 9th column):
SRR6691737.359236/0_14228//11999_12313 Censor repeat 5 264 1169 + . Repeat BOVA2 SINE 1 260 9
If I try to check the columns (and put them in a 'Column,Value' pair):
Column,Value
1,SRR6691737.359236/0_14228//11999_12313
2,Censor
3,repeat
4,5
5,264
6,1169
7,+
8,.
9,Repeat
10,BOVA2
11,SINE
12,1
13,260
14,9
That seems to be 14 columns, but you specify 9 columns...
Can you edit your question and be clear about this?
i.e. specify it as CSV:
SRR6691737.359236/0_14228//11999_12313,Censor,repeat,5,.....
Added info, after feedback:
file1 contains the following fields (tab-separated):
SRR6691737.359236/0_14228//11999_12313
Censor
repeat
5
264
1169
+
.
Repeat BOVA2 SINE 1 260 9
You want to convert this (using a script) to a tab-separated file:
CM011822.1
reefer
discordance
63738705
63738727
.
+
.
Read SRR6691737.359236 11999 12313
Dup 277
More info is needed to solve this!
field 1: How/Where is the info for 'CM011822.1' coming from?
field 2 and 3: 'reefer'/'discordance'. Is this fixed text: should these fields always contain these values, or are there exceptions?
field 4 and 5: Where are these values (63738705 ; 63738727) coming from?
OK, it's clear that there are more questions to be asked than I can answer here …
Second update:
Create a file, name it 'mani.awk':
FILENAME=="file1"{
split($1,a,"/");
x=a[1] " " a[4];
y=x; gsub(/_/," ",y);
r[y]=$1;
c=1; for (i in r) { print c++,i,"....",r[i]; }
}
FILENAME=="file2"{
print "<--", $0, "--> " ;
for (i in r) {
if ($9 ~ i) {
print "B:" r[i];
split(r[i],b,"/");
$9="Read " r[i];
print "OK";
}
};
print "<--", $0, "--> " ;
}
After this gawk -f mani.awk file1 file2 should produce the correct result.
If not, then I suggest you learn AWK 😉 and change the script as needed.
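If the debug printing gets in the way, here is a cleaned-up sketch of the same idea. One assumption on my part: both files are tab-separated, so FS and OFS are set to a tab and $9 really is the whole "Read ..." column:
gawk 'BEGIN { FS = OFS = "\t" }
FILENAME == "file1" {
    split($1, a, "/")            # a[1]=read name, a[4]=coordinates
    key = a[1] " " a[4]          # "SRR6691737.359236 11999_12313"
    gsub(/_/, " ", key)          # -> "SRR6691737.359236 11999 12313", as in file2
    r[key] = $1                  # remember the full file1 field
    next
}
{
    for (i in r) {
        p = index($9, i)         # literal substring match, no regex surprises
        if (p) {
            $9 = substr($9, 1, p - 1) r[i] substr($9, p + length(i))
            break
        }
    }
    print
}' file1 file2
On the sample rows this turns "Read SRR6691737.359236 11999 12313; Dup 277" into "Read SRR6691737.359236/0_14228//11999_12313; Dup 277".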

How can I compare rows in Unix text files and add them together in another text file?

I have a text file 1 that has 3 columns. The first column contains a number, the second a word (which can be either a string like dog or a number like 1050), the third column a TAG in capital letters.
I have another text file 2 that has 2 columns. The first column has a number, the second one has a TAG in capital letters.
I want to compare every row in my text file 1 with every row in my text file 2. If the TAG in column [3] of text file 1 is the same as the TAG in column [2] of text file 2, then I want to print the number from text file 1, then the number from text file 2, then the word from text file 1. There are no duplicate TAGs in text file 2 and there are no duplicate words in text file 1.
Illustration:
Text file 1
2 2737 HPL
32 hello PLS
3 world PLS
323 . OPS
Text file 2
342 HPL
56 PLS
342 DCC
4 OPS
I want:
2 342 2737
32 56 hello
3 56 world
323 4 .
You can do this in awk like this:
awk 'FNR==NR { h[$2] = $1; next } $3 in h { print $1, h[$3], $2 }' file2 file1
The first part saves each TAG and its number from file 2 in an associative array (h) keyed by TAG; the second part checks whether column 3 of each file 1 row is in this array and prints the relevant parts.
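For readability, here is the same one-liner written out with comments (a restatement, not a different method):
awk '
FNR == NR {            # true only while reading the first file named (file2)
    h[$2] = $1         # e.g. h["HPL"] = 342
    next               # skip the second block for file2 lines
}
$3 in h {              # now reading file1: is its TAG known?
    print $1, h[$3], $2
}' file2 file1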

Merging text files in csh while preserving place rows

I have about 100 text files with two columns that I would like to merge into a single file in a c shell script by using factor "A".
For example, I have file A that looks like this
A B1
1 100
2 200
3 300
4 400
and File B looks like this
A B2
1 100
2 200
3 300
4 400
5 300
6 400
I want the final file C to look like this:
A B1 B2
1 100 100
2 200 200
3 300 300
4 400 400
5 300
6 400
The cat command only puts the files on top of one another when writing them into file C. I would like to put the data next to each other. Is this possible?
To meet your exact spec, this will work; if the spec changes, you'll need to play with this some:
paste -d' ' factorA factorB \
    | awk 'NF==4{print $1, $2, $4} NF==2{print $1, $2}' \
    > factorC
# note: no spaces or tabs after each of the continuation chars `\` at the ends of lines!
output
$ cat factorC
A B1 B2
1 100 100
2 200 200
3 300 300
4 400 400
5 300
6 400
Not sure how you get bold headers to "transmit" thru unix pipes. ;->
Recall that awk programs all have a basic underlying structure, i.e.
awk 'pattern{action}' file
So pattern can be a range of lines, a reg-exp, an expression (NF==4), missing, or a few other things.
The action is what happens when the pattern is matched. This is more traditional looking code.
If no pattern is specified, then the action applies to all lines read. If no action is specified but the pattern matches, then the line is printed (without further ado).
NF means NumberOfFields in the current line, so NF==2 will only process lines with 2 fields (the trailing records from factorB).
The NF==4 pattern will only process records where both files contributed a row; there we want the first file's two fields plus the second file's value column, i.e. $1, $2 and $4. Hopefully, the print statements are self-explanatory.
The , separating $1, $2, $4 (for example) is the syntax that inserts awk's internal variable OFS, the OutputFieldSeparator, between the fields. It can be assigned, like OFS="\t" (to get an OFS of the tab char), but as in this case we are not specifying a value, we get the default value for OFS, which is the space char (" ") (no quotes!).
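A tiny illustration of how OFS shows up in the output (throwaway input, just for demonstration):
$ echo 'a b c' | awk 'BEGIN{OFS="-"} {print $1, $2, $3}'
a-b-c
With OFS left at its default, the commas would produce "a b c" instead.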
IHTH

Average number of rows in 10000 text files

I have a set of 10000 text files (file1.txt, file2.txt, ... file10000.txt). Each one has a different number of rows. I'd like to know the average number of rows among these 10000 files, excluding the last row of each file. For example:
File1:
a
b
c
d
last
File2:
a
b
c
last
File3:
a
b
c
d
e
last
Here I should obtain 4 as the result. I tried with Python, but it takes too much time to read all the files. How could I do it with a shell script?
Here's one way; the -1 drops each file's last row from the count. With the three example files above saved as file1.txt, file2.txt and file3.txt (5, 4 and 6 lines), it prints 4:
$ for i in {1..3}; do wc -l file${i}.txt; done | awk '{sum+=$1-1} END {print sum/NR}'
4
For the full data set, change {1..3} to {1..10000}.
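For 10000 files, spawning wc once per file is slow. A sketch of a variant that calls wc only once (assuming the glob fits in the shell's argument-list limit and no file is literally named "total"):
$ wc -l file*.txt | awk '$2 != "total" {sum += $1 - 1; n++} END {print sum/n}'
wc -l prints one "count filename" line per file plus a final "total" line, which the awk filter skips.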
