I am new to Linux scripting.
I need to take the lines from one file and, one by one, write each of them into a specific line of another file.
Example:
File1.txt:
line1
line2
line3
File2.txt:
abc
abc
xxx
I need to first write "line1" in place of the 3rd line of File2.txt, then do some operations with this file, then write "line2" in place of the 3rd line of File2.txt, and so on.
At the moment, this is what I have:
for n in {1..5}
do
a=$(sed '24!d' File1) # read line 24
echo $a
sed -i '1s/.*/a/' File2.txt
done
Now, instead of the 24 in line 3, I want to put the variable n used in the loop. Is that possible?
The same goes for line 5, where "a" is supposed to be a variable, but the program just replaces the first line of File2.txt with the literal string "a".
Can I use these commands, or do I need to use something else (and if so, what)?
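One way the loop could look is sketched below, with double quotes so the shell expands n and a before sed sees them. This is only a sketch: it assumes GNU sed for -i, and that the lines in File1.txt contain no characters special to sed, such as / or &.
for n in {1..5}
do
    a=$(sed "${n}q;d" File1.txt)      # grab line n of File1.txt
    sed -i "3s/.*/$a/" File2.txt      # overwrite the 3rd line of File2.txt with it
    # ... do the other operations on File2.txt here ...
done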
Try,
sed '2r t2' t1
If you want to perform some operation on file 2 before inserting it, you can simply use
sed 2r<(cat t2) t1   # you can change the command (cat t2) as per your need
I think this will solve your problem.
root@ubuntu:~/T/e/s/t# cat t1
test
asdf
xyza
root@ubuntu:~/T/e/s/t# cat t2
sample line 1
sample line 2
sample line 3
sample line 4
root@ubuntu:~/T/e/s/t# sed 2r<(cat t2) t1
test
asdf
sample line 1
sample line 2
sample line 3
sample line 4
xyza
Details
2: Second line
r: read the given file and insert its contents after that line
If you want to apply this change to the file itself, you can use sed's -i option. Refer to man sed for more details.
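For example, with GNU sed, the earlier command becomes:
sed -i '2r t2' t1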
Thanks, Benjamin W., for pointing out the missing scenario (sed '2r t2' t1).
I have a first file with 3 lines:
test1
test2
test3
I use the grep command to search every line in a directory with 10 files:
grep -Ril "test2"
The result is:
/usr/src/files/rog.txt
I need grep to delete 5 lines from the file it finds: the test2 line plus the 2 lines before and the 2 lines after it.
Please can you help me use grep correctly?
One way is to use the -A and -B options of grep, but it takes two steps: first you select all matches together with the surrounding lines, then you use that list to filter the original file. This has a side effect, which in your application is probably acceptable.
To do this, you issue the following commands:
grep -A 2 -B 2 "test2" file1.txt > negative.txt
grep -v -f negative.txt file1.txt
The first line outputs all occurrences of test2 in file1.txt together with the 2 preceding and 2 succeeding lines of each match. If I understood your question correctly, this is the "negative" of the lines you want to keep. The second line then lists all lines from file1.txt which do not appear in that "negative" list. This should be close to what you need.
There is only one side effect which you should know. If file1.txt contains duplicate lines like this:
test1
test2
test3
test4
...
test11
test12
test3
test4
The code above would also filter out the last two lines, even though there is no "test2" line nearby, because they are duplicates of lines 3 and 4, which were written to negative.txt on account of line 2. But if you are processing file lists, duplicates are probably not an issue.
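If duplicates do matter, a different approach is to delete by line number rather than by line content. A rough sketch, assuming there is exactly one match, that it sits at least three lines into the file, and GNU sed for -i:
file=/usr/src/files/rog.txt
n=$(grep -n -m1 "test2" "$file" | cut -d: -f1)   # line number of the match
sed -i "$((n-2)),$((n+2))d" "$file"              # delete it plus 2 lines before and after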
I am new to using the Mac terminal. I need to add a tab-delimited column to a text file with 3 existing columns. The columns look pretty much like this:
org1 1-20 1-40
org2 3-35 6-68
org3 16-38 40-16
etc.
I need them to look like this:
org1 1-20 1-40 1
org2 3-35 6-68 2
org3 16-38 40-16 3
etc.
My apologies if this question has been covered. Answers to similar questions are sometimes exceedingly esoteric and are not easily translatable to this specific situation.
In awk: print the record, then the required tab and the row number after it:
$ awk '{print $0 "\t" NR }' foo
org1 1-20 1-40 1
org2 3-35 6-68 2
org3 16-38 40-16 3
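awk prints to standard output, so redirect the output to capture it. If you have GNU awk 4.1 or later, it can also edit the file in place (here foo stands in for your file name and foo_numbered is just an example output name):
awk '{print $0 "\t" NR }' foo > foo_numbered     # write the result to a new file
gawk -i inplace '{print $0 "\t" NR }' foo        # or modify foo itself (GNU awk >= 4.1)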
If you want to add the line numbers to the last column:
perl -i -npe 's/$/"\t$."/e' file
where
-i replaces the file in place (remove it if you want to print the result to standard output);
-n causes Perl to apply the expression to each line of the file, just like sed;
-p prints each line after the expression has been applied;
-e accepts the Perl expression to run;
s/.../.../e replaces the first part with the second (delimited with slashes), and the e flag causes Perl to evaluate the replacement as a Perl expression;
$ is the end-of-line anchor;
$. variable keeps the number of the current line
In other words, the command replaces the end of the line ($) with a tab followed by the line number $..
You can paste the file next to the same file with line numbers prepended (nl), and all the other columns removed (cut -f 1):
$ paste infile <(nl infile | cut -f 1)
org1 1-20 1-40 1
org2 3-35 6-68 2
org3 16-38 40-16 3
The <(...) construct is called process substitution and basically allows you to treat the output of a command like a file.
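As a stand-alone illustration of process substitution, this compares the sorted contents of two files without creating any temporary files (the file names are just placeholders):
diff <(sort file_a) <(sort file_b)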
I want to replace the first value (in the first column of the first line, so here 1) by adding one to it. I have a file like this:
1
1 1
2 5
1 6
I use these commands:
read -r a < file
echo $aa
sed "s/$aa/$(($aa + 1))/" file
# or
sed 's/$aa/$(($aa + 1))/' file
But when I do that, it changes every 1 in the first column into a 2. I have tried changing the quotes, but it makes no difference.
Restrict the script to the first line only, i.e.
sed '1s/old/new/'
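Combined with the variable from the question, something like the following should work. Note the double quotes, so the shell expands the arithmetic before sed runs; this sketch assumes GNU sed for -i and that the first line contains only the number:
read -r a < file
sed -i "1s/.*/$((a + 1))/" file   # replace the whole first line with a+1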
awk might be a better tool for this.
awk 'NR==1{$1=$1+1}1'
For the first line, add 1 to the first field and print. This can be rewritten as
awk 'NR==1{$1+=1}1'
or
awk 'NR==1{$1++}1'
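Since awk writes to standard output, you can update the file by writing to a temporary file and moving it back (tmp is just a scratch name):
awk 'NR==1{$1++}1' file > tmp && mv tmp file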
perl -p0e 's/(\d+)/$1+1/e' file
Possible duplicate: Bash tool to get nth line from a file
I need to select the nth line of a file, where the line number is given by the variable PBS_ARRAYID.
The accepted solution in the other question (linked above) is:
sed 'NUMq;d' job_params
I'm trying to adapt it for the variable like this (actually I tried lots of things, but this is the one that makes the most sense):
sed "${PBS_ARRAYID}q;d" job_params
But I get the following error:
sed: -e expression #1, char 2: invalid usage of line address 0
What am I doing wrong?
Your solution is correct:
sed "${PBS_ARRAYID}q;d" job_params
The only problem is that sed considers the first line to be line 1 (thanks, rici), so PBS_ARRAYID must be in the range [1,X], where X is the number of lines in the input file, which you can check with:
wc -l job_params
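The line address 0 in the error message suggests PBS_ARRAYID was 0 when sed ran. If your array IDs start at 0, one option is to shift them by one (a sketch):
sed "$((PBS_ARRAYID + 1))q;d" job_params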
Here is an awk example.
Let's say we have this file:
cat file
1 one
2 two
3 three
4 four
5 five
6 six
7 seven
8 eight
9 nine
Then we have these variables:
var="four"
number=2
Then this awk command gives:
awk '$0~v {f=NR} f && f+n==NR' v="$var" n="$number" file
6 six
Here $0~v {f=NR} records the line number of the line matching the pattern, and f && f+n==NR prints the line that comes n lines after it.
I have multiple files which have the same structure but not the same data. Say their names are values_#####.txt (values_00001.txt, values_00002.txt, etc.).
I want to extract a specific line from each file and copy it in another file. For example, I want to extract the 8th line from values_00001.txt, the 16th line from values_00002.txt, the 24th line from values_00003.txt and so on (increment = 8 each time), and copy them line by line in a new file (say values.dat).
I am new to shell scripting; I tried to use sed, but I couldn't figure out how to do it.
Thank you in advance for your answers!
I believe the ordering of the files is also important, to make sure you get the output in the desired sequence.
Consider this script:
n=8
while read f; do
    sed "${n}q;d" "$f" >> output.txt   # print line n of the file, then quit
    ((n+=8))                           # move 8 lines further for the next file
done < <(printf "%s\n" values_*.txt | sort -t_ -nk2,2)   # file names sorted by numeric suffix
This can do it:
for var in {1..NUMBER}   # replace NUMBER with the number of files
do
awk -v line=$var 'NR==8*line' values_${var}.txt >> values.dat
done
Explanation
The for loop is basic.
-v line=$var "gives" the $var value to awk, so it can be used with the variable line.
'NR==8*line' prints the line number 8*{value we are checking}.
values_${var}.txt gets the file values_1.txt, values_2.txt, and so on.
>> values.dat redirects to values.dat file.
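One caveat: the question's files use zero-padded names such as values_00001.txt, so the file name may need to be built with printf. A sketch, assuming 5 such files:
for var in {1..5}
do
    awk -v line="$var" 'NR==8*line' "values_$(printf '%05d' "$var").txt" >> values.dat
done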
Test
I created 3 identical files a1, a2, a3. Each contains 30 lines, and each line is just its own line number:
$ cat a1
1
2
3
4
...
Executing the one liner:
$ for var in {1..3}; do awk -v line=$var 'NR==8*line' a${var} >> values.dat; done
$ cat values.dat
8
16
24