I want to have a for loop to check the files in my current directory and 2 specific "sub-sub-directories" called folder1 and folder2. Instead of going:
for file in *
do stuff
done
for file in "./sub_dir/folder1"/*
do stuff
done
for file in "./sub_dir/folder2"/*
do stuff
done
Is there a way to do all this with one for loop? Something along the lines of:
for file in * || "./sub_dir/folder1"/* || "./sub_dir/folder2"/*
do stuff
done
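For reference, globs in bash can simply be listed one after another in the for statement; no operator is needed between them (a sketch using the paths from the question, with echo standing in for "do stuff"):
for file in * "./sub_dir/folder1"/* "./sub_dir/folder2"/*
do
    echo "$file"    # stand-in for "do stuff"
done
Each pattern expands independently, and the loop walks the combined list.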
I have files in directories like:
./PBMCs/SRR1_1.fastq
./PBMCs/SRR1_2.fastq
./Monos/SRR2.fastq
./Monos/SRR3.fastq
I want to change the SRR# to a more informative name based on a file of key-value pairs:
SRR1 pbmc-1
SRR2 mono-1
SRR3 mono-2
And rename the files as:
./PBMCs/pbmc-1_1.fastq
./PBMCs/pbmc-1_2.fastq
./Monos/mono-1.fastq
./Monos/mono-2.fastq
All that I can think to do is loop through the list of original files and then loop through the lines of the name-change.txt file and replace the strings. However, I'm not sure how to implement this or if it's a good way to approach this.
Assuming all *.fastq are one subdirectory deep, this should work fine:
while read -r old new; do
    for fastq in ./*/"$old"*.fastq; do
        new_name=$new${fastq##*/"$old"}
        echo mv "$fastq" "${fastq%/*}/$new_name"
    done
done <name-change.txt
Remove echo if the output looks good.
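For reference, here is what the two parameter expansions in that loop produce on a sample path (values are hypothetical):
fastq=./PBMCs/SRR1_1.fastq
old=SRR1
new=pbmc-1
echo "${fastq##*/"$old"}"    # strips the longest prefix matching */SRR1 -> _1.fastq
echo "${fastq%/*}"           # strips the shortest /* suffix -> ./PBMCs
So new_name becomes pbmc-1_1.fastq, moved back into ./PBMCs.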
My bash script retrieves a list of folders and stores them in a projectlist variable.
Later, I want to loop through each folder, using the project name as part of the file path. Each project folder contains a ProjectFile.txt file, whose contents I want to append to a common .csv file.
projectlist=$(./some/project/filepath/projects)
echo ${projectlist[@]}
while read project; do
    while read line; do
        echo "$project" "$line" >> /data/allData.csv
    done < ./some/project/filepath/projects/"$project"/ProjectFile.txt
done < $projectlist
What it currently ends up doing however, is printing each individual element in the projectlist, without writing anything to the .csv file. Any idea where the issue lies?
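A likely culprit, as a guess from the symptom: done < $projectlist redirects from a file whose name is the expanded list, not from the list's contents, so the outer loop reads nothing and the .csv is never written. Feeding the variable in with a here-string should behave as intended (a sketch using the paths from the question; the ls call is an assumption about how the list is built):
projectlist=$(ls ./some/project/filepath/projects)    # assumption: newline-separated folder names
echo "$projectlist"
while read -r project; do
    while read -r line; do
        echo "$project" "$line" >> /data/allData.csv
    done < "./some/project/filepath/projects/$project/ProjectFile.txt"
done <<< "$projectlist"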
I have a script that copies 1000 files to another folder. However, I have files that end with the following:
*_LINKED_1.trees
*_LINKED_2.trees
*_LINKED_3.trees
.
.
.
*_LINKED_10.trees
'*' is not part of the name; there's some string in its place.
I want to copy 1000 files of each of the types listed above, in a smart way.
#!/bin/bash
for entry in /home/noor/popGen/sweeps/hard/final/*
do
    for i in {1..1000}
    do
        cp $entry /home/noor/popGen/sweeps/hard/sample/hard
    done
done
Could there be a smart way to copy 1000 files of each type? One way would be an if-statement, but I'd have to change that if-statement 10 times.
The script below should do the required task:
file_count=0
for i in {1..10}; do
    for j in source/*_LINKED_"$i".trees; do
        file_count=$((file_count+1))
        echo cp "$j" destination/
        if ((file_count >= 1000)); then
            file_count=0
            break
        fi
    done
done
The outer loop for i in {1..10} selects the file type (*_LINKED_$i.trees).
The inner loop iterates through all files of the current type (e.g. *_LINKED_1.trees, then *_LINKED_2.trees, and so on up to *_LINKED_10.trees) and stops after the first 1000 of them (the file_count >= 1000 check), resetting the counter before moving on to the next type. Remove the echo once the printed commands look right.
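If GNU coreutils are available, the per-type cap can also be written without a manual counter (a sketch; source/ and destination/ are the placeholder paths from the answer above):
for i in {1..10}; do
    printf '%s\0' source/*_LINKED_"$i".trees |
        head -z -n 1000 |
        xargs -0 cp -t destination/
done
printf emits each matching name NUL-terminated, head -z keeps the first 1000, and xargs -0 hands them to cp.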
I have a list of xml files in a folder, like data0.xml, data1.xml, data2.xml, ... data99.xml.
I have to read the contents of these files for further processing. Currently I am using a for loop like below:
for xmlentry in `ls -v *.xml`
do
    execute_loop $xmlentry
done
This executes fine for all xml files in sequence, but I want to force the for loop to start from data10.xml and proceed through data99.xml:
data10.xml, data11.xml, ..., data99.xml
How can I do something like this in shell scripting? Even better if I could control the start of the loop with a variable.
You can construct the names of the files and loop through them. In your specific example, something like this could work:
first=10
last=99
for i in $(seq "$first" "$last")
do
    xmlfile="data${i}.xml"
    execute_loop "$xmlfile"
done
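A note on alternatives: brace expansion (for i in {10..99}) also works when the bounds are literal, but braces expand before variables do, so {$first..$last} will not work. With variable bounds, seq as above or bash's C-style for loop both do the job:
first=10
last=99
for ((i = first; i <= last; i++))
do
    execute_loop "data${i}.xml"
done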
I have 3 folders containing files as follows:
Folder1 contains only 1 file called "data".
Folder2 contains more than a hundred files that their names start with "part1" with the same text structure.
Folder3 contains more than a hundred files that their names start with "part2" with the same text structure.
I've created a program using AWK that takes as input the file from folder1, one file from folder2, and one file from folder3, and it works well.
Now I want to give the program all the files from all the folders as input. Therefore, I need a way to detect that the program has finished with the first two files (part1* + part2*) and is about to process the next ones, so that all the variables and arrays can be reset for the new processing.
The program will be run like this:
$ awkprogram folder1/data folder2/part1* folder3/part2*
Something like this maybe?
FNR==1 {                      # at the first record of every input file
    filecounter++             # count which file is being processed
}
FNR==1 && filecounter > 2 {   # once the first two files have been processed
    # reset variables         # do whatever
}
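If GNU awk is in use, the built-in ARGIND variable already holds the position of the current input file in ARGV, so the manual counter can be dropped (gawk-only sketch):
FNR==1 && ARGIND > 2 {        # gawk only: ARGIND indexes the current file
    # reset variables
}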