Count lines in a file using bash and add the result as the first line - bash

I have a text file:
10 20 30 40
50 60 70 80
By using
$ wc -l file.txt
2 file.txt
I get the count, but I want to add that result to my text file.
I want the result to be like this:
2
10 20 30 40
50 60 70 80
What should I do in order to prepend the result to the text file?
I have many of these files in one folder, and instead of handling a single text file at a time, I want to process all of the files at once.

For one file, you can do this:
wc -l < file.txt | cat - file.txt > tmp && mv tmp file.txt
This uses cat to concatenate the result of wc -l < file.txt with the contents of file.txt. The result is written to a temporary file, then the original file is overwritten.
For many files (e.g. all files ending in .txt), you can use a loop:
for file in *.txt; do
wc -l < "$file" | cat - "$file" > tmp && mv tmp "$file"
done
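If a file literally named tmp might already exist in the directory, a variant using mktemp avoids clobbering it (a sketch; mktemp is assumed to be available):
for file in *.txt; do
tmp=$(mktemp) || exit 1          # create a unique temporary file instead of a fixed "tmp"
wc -l < "$file" | cat - "$file" > "$tmp" && mv "$tmp" "$file"
done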

Try:
echo "$(wc -l < myfile.txt)" | cat - myfile.txt > tmp && mv tmp myfile.txt

Awk may be your friend:
awk 'BEGIN{RS="^$"}NR==FNR{printf "%d\n%s",gsub(/\n/,"\n"),$0}' file \
> temp && mv temp file
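If GNU sed is available (an assumption on my part; BSD sed handles -i and the i command differently), the count can also be inserted in place without a temporary file:
# prepend the pre-edit line count; the file is assumed to be non-empty
sed -i "1i $(wc -l < file.txt)" file.txt
And for all .txt files in the folder:
for file in *.txt; do
sed -i "1i $(wc -l < "$file")" "$file"
done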

Related

Why is this bash loop failing to concatenate the files?

I am at my wits' end as to why this loop is failing to concatenate the files the way I need it to. Basically, let's say we have the following files:
AB124661.lane3.R1.fastq.gz
AB124661.lane4.R1.fastq.gz
AB124661.lane3.R2.fastq.gz
AB124661.lane4.R2.fastq.gz
What we want is:
cat AB124661.lane3.R1.fastq.gz AB124661.lane4.R1.fastq.gz > AB124661.R1.fastq.gz
cat AB124661.lane3.R2.fastq.gz AB124661.lane4.R2.fastq.gz > AB124661.R2.fastq.gz
What I tried (and didn't work):
Create and save the file names (e.g. AB124661) to an ID file:
ls -1 *R1*.gz | awk -F '.' '{print $1}' | sort | uniq > ID
This creates an ID file that stores the sample/file names.
Run the following loop:
for i in `cat ./ID`; do cat $i\.lane3.R1.fastq.gz $i\.lane4.R1.fastq.gz \> out/$i\.R1.fastq.gz; done
for i in `cat ./ID`; do cat $i\.lane3.R2.fastq.gz $i\.lane4.R2.fastq.gz \> out/$i\.R2.fastq.gz; done
The loop fails and concatenates into empty files.
Things I have checked:
Yes, the ID file is definitely in the folder.
When I run the loop with echo, it prints the cat commands correctly.
Any help will be very much appreciated,
Best,
AC
Why are you escaping the >? That's going to result in a "cat: '>': No such file or directory" error instead of a redirection.
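For reference, a minimal sketch of the original loop with the backslashes removed and the expansions quoted (it assumes the out/ directory already exists; the next answer shows a more robust way to read the ID file):
for i in $(cat ./ID); do
cat "$i.lane3.R1.fastq.gz" "$i.lane4.R1.fastq.gz" > "out/$i.R1.fastq.gz"
cat "$i.lane3.R2.fastq.gz" "$i.lane4.R2.fastq.gz" > "out/$i.R2.fastq.gz"
done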
Don't read lines with for
while IFS= read -r id; do
cat "${id}.lane3.R1.fastq.gz" "${id}.lane4.R1.fastq.gz" > "out/${id}.R1.fastq.gz"
cat "${id}.lane3.R2.fastq.gz" "${id}.lane4.R2.fastq.gz" > "out/${id}.R2.fastq.gz"
done < ./ID
Let's say you have the IDs stored in the file ./ID, one per line:
while read -r line; do
cat "$line".lane3.R1.fastq.gz "$line".lane4.R1.fastq.gz > "$line".R1.fastq.gz
cat "$line".lane3.R2.fastq.gz "$line".lane4.R2.fastq.gz > "$line".R2.fastq.gz
done < ./ID
A pure shell solution could look like this:
for file in *.fastq.gz; do
id=${file%%.*}
[ -e "$id".R1.fastq.gz ] || cat "$id".*.R1.fastq.gz > "$id".R1.fastq.gz
[ -e "$id".R2.fastq.gz ] || cat "$id".*.R2.fastq.gz > "$id".R2.fastq.gz
done
Alternatively:
printf '%s\n' *.fastq.gz | cut -d. -f1 | sort -u |
while IFS= read -r id; do
cat "$id".*.R1.fastq.gz > "$id".R1.fastq.gz
cat "$id".*.R2.fastq.gz > "$id".R2.fastq.gz
done
This solution assumes filenames of interest don't contain newline characters.

How to add the beginning of one file to another file using a loop

I have files 1.txt, 2.txt, 3.txt and 1-bis.txt, 2-bis.txt, 3-bis.txt
cat 1.txt
#ok
#5
6
5
cat 2.txt
#not ok
#56
13
56
cat 3.txt
#nothing
#
cat 1-bis.txt
5
4
cat 2-bis.txt
32
24
cat 3-bis.txt
I would like to add the lines starting with # (from the non-bis files) at the beginning of the "bis" files, in order to get:
cat 1-bis.txt
#ok
#5
5
4
cat 2-bis.txt
#not ok
#56
32
24
cat 3-bis.txt
#nothing
#
I was thinking of using grep -P "#" to select the lines with # (or maybe sed -n), but I don't know how to loop over the files to solve this problem.
Thank you very much for your help
You can use this solution:
for f in *-bis.txt; do
{ grep '^#' "${f//-bis}"; cat "$f"; } > "$f.tmp" && mv "$f.tmp" "$f"
done
If you only want the # lines at the beginning of each file, then replace
grep '^#' "${f//-bis}"
with:
awk '!/^#/{exit}1' "${f//-bis}"
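Putting that together, the loop becomes (a sketch under the same assumptions as the solution above):
for f in *-bis.txt; do
{ awk '!/^#/{exit}1' "${f//-bis}"; cat "$f"; } > "$f.tmp" && mv "$f.tmp" "$f"  # copy only the leading # block, then the bis file
done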
You can loop over the ?.txt files and use parameter expansion to derive the corresponding -bis filename:
for file in ?.txt ; do
bis=${file%.txt}-bis.txt
grep '^#' "$file" > tmp
cat "$bis" >> tmp
mv tmp "$bis"
done
You don't need grep -P; plain grep is enough. Just add ^ to match only the octothorpes at the beginning of a line.
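A quick illustration with two hypothetical input lines; only #header comes out the other end, because the inline # is not at the start of its line:
printf '%s\n' '#header' 'value # inline comment' | grep '^#'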

Bash count non-commented lines and write to output with filename

I would like to count the non-commented lines in multiple files and append the result to an output file
This is how I would count the non-commented lines for multiple files, but I don't know how to store the result together with the filename in an output.txt file.
for file in *txt
do
cat "$file" | sed '/^\s*#/d' | wc -l
done
You can write several things per line, and you can redirect the output of the whole loop to a file:
for file in *txt
do
echo -n "$file "
cat "$file" | sed '/^\s*#/d' | wc -l
done > output.txt
You can also shorten the per-file processing to:
egrep -v '^\s*#' "$file" | wc -l
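Combined into one loop (a sketch; printf replaces echo -n for predictable behaviour, and \s in the pattern assumes GNU grep):
for file in *txt
do
printf '%s ' "$file"              # filename, a space, no newline
egrep -v '^\s*#' "$file" | wc -l  # count of non-comment lines
done > output.txt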
for file in *txt
do
cat "$file" | sed '/^\s*#/d' | wc -l >> output.txt
done

append output of each iteration of a loop to the same file in bash

I have 44 files (2 for each chromosome), divided into two types: .vcf and .filtered.vcf.
I would like to run wc -l for each of them in a loop and always append the output to the same file. However, I would like to have 3 columns in this file: chr[1-22], the wc -l of the .vcf, and the wc -l of the .filtered.vcf.
I've been running an independent wc -l for each file and pasting the two outputs together column-wise for each chromosome, but this is obviously not very efficient, because I'm generating a lot of unnecessary files. This is the code I'm trying for the 22 pairs of files:
wc -l file1.vcf | cut -f 1 > out1.vcf
wc -l file1.filtered.vcf | cut -f 1 > out1.filtered.vcf
paste -d "\t" out1.vcf out1.filtered.vcf
I would like to have just one output file containing three columns:
Chromosome VCFCount FilteredVCFCount
chr1 out1 out1.filtered
chr2 out2 out2.filtered
Any help will be appreciated, thank you very much in advance :)
printf "%s\n" *.filtered.vcf |
cut -d. -f1 |
sort |
xargs -n1 sh -c 'printf "%s\t%s\t%s\n" "$1" "$(wc -l <"${1}.vcf")" "$(wc -l <"${1}.filtered.vcf")"' --
Output a newline-separated list of the .filtered.vcf files in the directory
Remove the extensions with cut (probably something along the lines of xargs -i basename {} .filtered.vcf would be safer)
Sort it, for nicely sorted output (probably something along the lines of sort -tr -k2 -n would sort numerically and would be even better)
xargs -n1 - for each file, execute the sh -c script, in which:
printf "%s\t%s\t%s\n" - output with custom format string ...
"$1" - the filename and...
"(wc -l <"${1}.vcf")" - the count the lines in .vcf file and...
"$(wc -l <"${1}.filtered.vcf")" - the count of the lines in the .filtered.vcf
Example:
> touch chr{1..3}{,.filtered}.vcf
> echo > chr1.filtered.vcf ; echo > chr2.vcf ;
> printf "%s\n" *.filtered.vcf |
> cut -d. -f1 |
> sort |
> xargs -n1 sh -c 'printf "%s\t%s\t%s\n" "$1" "$(wc -l <"${1}.vcf")" "$(wc -l <"${1}.filtered.vcf")"' --
chr1 0 1
chr2 1 0
chr3 0 0
To get a nice-looking table with headers, use column:
> .... | column -N Chromosome,VCFCount,FilteredVCFCount -t -o ' '
Chromosome VCFCount FilteredVCFCount
chr1 0 1
chr2 1 0
chr3 0 0
Maybe try this.
for chr in chr*.vcf; do
case $chr in *.filtered.vcf) continue ;; esac  # skip the .filtered files; they are read as the second input below
base=${chr%.vcf}
awk -v base="$base" 'BEGIN { OFS="\t"
# Remove this to not have this pesky header line
print "Chromosome", "VCFCount", "FilteredVCFCount"
}
FNR==1 && n { p=n }
{ n=FNR }
END { print base, p, n }' "$chr" "$base.filtered.vcf"
done >counts.txt
The very simple Awk script just collects the highest line number for each file (so we basically reimplement wc -l) and prints the collected numbers in the desired format. FNR is the line number in the current input file; we simply save this, and copy the value to p to keep the saved value from the previous file in a separate variable when we switch to a new file (starting over at line number 1).
The shell parameter substitution ${variable%pattern} retrieves the value of variable with any suffix match on pattern removed. (There is also ${variable#pattern} to remove a prefix, and Bash has ## and %% to trim the longest pattern match instead of the shortest.)
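A quick illustration with a hypothetical filename:
f=chr7.filtered.vcf
echo "${f%.vcf}"    # chr7.filtered  (shortest suffix match removed)
echo "${f%%.*}"     # chr7           (longest suffix match removed)
echo "${f#chr}"     # 7.filtered.vcf (shortest prefix match removed)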
If efficiency is important, you could probably refactor all of the script into a single Awk script, but this way, all the pieces are simple and hopefully understandable.
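A rough sketch of what such a single-Awk refactor could look like (my own guess, not the answer's code; the chr*.vcf glob and tab-separated output are taken from above):
awk 'BEGIN { OFS = "\t"; print "Chromosome", "VCFCount", "FilteredVCFCount" }
{ counts[FILENAME] = FNR }   # the last FNR seen for a file is its line count
END {
for (i = 1; i < ARGC; i++) {
f = ARGV[i]
if (f ~ /\.filtered\.vcf$/) continue   # report each pair once, keyed on the plain .vcf
base = f; sub(/\.vcf$/, "", base)
print base, counts[f] + 0, counts[base ".filtered.vcf"] + 0
}
}' chr*.vcf > counts.txt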

KSH Shell script - Process file by blocks of lines

I am trying to write a bash script in a KSH environment that would iterate through a source text file and process it in blocks of lines.
So far I have come up with this code, although it seems to run indefinitely, since the tail command does not return 0 lines when asked to retrieve lines beyond the end of the source text file:
i=1
while [[ `wc -l /path/to/block.file | awk -F' ' '{print $1}'` -gt $((i * 1000)) ]]
do
lc=$((i * 1000))
DA=ProcessingResult_$i.csv
head -$lc /path/to/source.file | tail -1000 > /path/to/block.file
cd /path/to/processing/batch
./process.sh #This will process /path/to/block.file
mv /output/directory/ProcessingResult.csv /output/directory/$DA
i=$((i + 1))
done
Before launching the above script I perform a manual 'first injection': head -$lc /path/to/source.file | tail -1000 > /path/to/temp.source.file
Any idea on how to get the script to stop after processing the last lines from the source file?
Thanks in advance to you all
If you do not want to create so many temporary files up front before beginning to process each block, you could try the solution below. It can save a lot of space when processing huge files.
#!/usr/bin/ksh
range=$1
file=$2
b=0; e=0; seq=1
while true
do
b=$((e+1)); e=$((range*seq));
sed -n ${b},${e}p $file > ${file}.temp
[ $(wc -l ${file}.temp | cut -d " " -f 1) -eq 0 ] && break
## process the ${file}.temp as per your need ##
((seq++))
done
The above code generates only one temporary file at a time.
You could pass the range (block size) and the filename as command-line arguments to the script.
example: extractblock.sh 1000 inputfile.txt
Have a look at man split:
NAME
split - split a file into pieces
SYNOPSIS
split [OPTION]... [INPUT [PREFIX]]
-l, --lines=NUMBER
put NUMBER lines per output file
For example
split -l 1000 source.file
Or, to extract the 3rd chunk for example (1000 here is not the number of lines; it is the number of chunks, i.e. a chunk is 1/1000 of source.file):
split -nl/3/1000 source.file
A note on the condition:
[[ `wc -l /path/to/block.file | awk -F' ' '{print $1}'` -gt $((i * 1000)) ]]
Maybe it should be source.file instead of block.file, and it is quite inefficient on a big file because the lines are re-counted on every iteration. The number of lines can be stored in a variable once; also, giving wc its input on standard input avoids having to strip the filename with awk:
nb_lines=$(wc -l </path/to/source.file )
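A sketch of the loop rebuilt around that idea (the paths are the placeholders from the question; tail -n +N is used so the last, shorter block is extracted correctly):
nb_lines=$(wc -l < /path/to/source.file)
i=1
while [ $(( (i - 1) * 1000 )) -lt "$nb_lines" ]
do
start=$(( (i - 1) * 1000 + 1 ))
tail -n +"$start" /path/to/source.file | head -n 1000 > /path/to/block.file
# run ./process.sh on /path/to/block.file here, as in the original script
i=$((i + 1))
done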
With Nahuel's recommendation I was able to build the script like this:
i=1
cd /path/to/sourcefile/
split source.file -l 1000 SF
for sf in /path/to/sourcefile/SF*
do
DA=ProcessingResult_$i.csv
cd /path/to/sourcefile/
cat $sf > /path/to/block.file
rm $sf
cd /path/to/processing/batch
./process.sh #This will process /path/to/block.file
mv /output/directory/ProcessingResult.csv /output/directory/$DA
i=$((i + 1))
done
This worked great
