Preventing input function from generating files not present in sample file - bioinformatics

I've been working on a snakemake problem I've been unable to solve. Given a file of samples such as:
tissue replicate type file
ear rep1 H3K4me3 00.data/chip_seq/H3K4me3/ear_H3K4me3_rep1.fastq
ear rep2 H3K4me3 00.data/chip_seq/H3K4me3/ear_H3K4me3_rep2.fastq
ear rep1 input 00.data/chip_seq/input/ear_input_rep1.fastq
ear rep2 input 00.data/chip_seq/input/ear_input_rep2.fastq
leaf rep1 H3K4me3 00.data/chip_seq/H3K4me3/ear_H3K4me3_rep1.fastq
leaf rep2 H3K4me3 00.data/chip_seq/H3K4me3/ear_H3K4me3_rep2.fastq
leaf rep1 input 00.data/chip_seq/input/ear_input_rep1.fastq
leaf rep2 input 00.data/chip_seq/input/ear_input_rep2.fastq
root rep1 input 00.data/chip_seq/input/ear_input_rep1.fastq
root rep2 input 00.data/chip_seq/input/ear_input_rep2.fastq
The Snakemake input function I use to look up these files (here called get_chip_mods) generates combinations of wildcards that do not actually exist. In this case get_chip_mods generates combinations such as root_rep1_H3K4me3 even though no such file is listed in the samples file. Is there a way to prevent the function from generating combinations that are not present in the samples file?
Below is the beginning of my pipeline.
#Load Samples from the CSV file - index the important ones
samples = pd.read_csv(config["samples"], sep=' ').set_index(["tissue", "type", "replicate"], drop=False)
samples.index = samples.index.set_levels([i.astype(str) for i in samples.index.levels]) # enforce str in index
rule all:
    input:
        "00.data/reference/bowtie_idx.1.bt2",
        expand("00.data/trimmed_chip/{tissue}_{chip}_{replicate}_trimmed.fq",
               tissue = samples["tissue"], chip = samples["type"],
               replicate = samples["replicate"]),
#This is where I believe I've been hitting issues.
def get_chip_mods(wildcards):
    final_list = samples.loc[(wildcards.tissue, wildcards.type, wildcards.replicate), ["file"]].dropna()
    print(final_list)
    return final_list
rule trim_reads:
    input:
        get_chip_mods
    params:
        "00.data/trimmed_chip/log_files/{tissue}_{type}_{replicate}.log"
    output:
        "00.data/trimmed_chip/{tissue}_{type}_{replicate}_trimmed.fq"
    threads: 5
    message: """Trimming"""
    shell:
        """
        java -jar /usr/local/apps/eb/Trimmomatic/0.36-Java-1.8.0_144/trimmomatic-0.36.jar \
        SE -threads {threads} -phred33 {input} {output} \
        ILLUMINACLIP:/scratch/jpm73279/04.lncRNA/02.Analysis/23.generate_all_metaplots/00.data/adapter.fa:2:30:10 \
        LEADING:3 TRAILING:3 SLIDINGWINDOW:4:15 MINLEN:36
        """
The error I receive is as follows:
KeyError:
Wildcards:
tissue=root
type=H3K4me3
replicate=rep1

The error has to do with the expand function in the rule all. By default, expand uses Python's itertools product to generate all possible combinations of your wildcards. Some of these combinations do not exist in your dataframe index, hence the KeyError.
expand, however, allows you to customize the way the wildcards are combined, so you can rewrite the call as follows to resolve the issue:
expand("00.data/trimmed_chip/{tissue}_{chip}_{replicate}_trimmed.fq".split(), zip, tissue = samples["tissue"], chip = samples["type"], replicate = samples["replicate"])

The only source of truth for resolving the ambiguity is the samples file with the valid combinations, so the workflow should be written in a way that doesn't depend on which combinations are possible.
One way to do that is to replace the three wildcards in the all rule with a single wildcard {tissue_type_replicate} and produce the possible values using a Python function, as sketched below. That gives Snakemake the information about which files it needs to produce. You can make the same change in other rules too (this is the simplest workable solution as long as you don't need the exact values of {tissue}, {type}, and {replicate} in the shell section). You may also leave the {tissue}, {type}, and {replicate} wildcards untouched in the other rules: Snakemake will find the match.

Related

How to loop over multiple folders to concatenate FastQ files?

I have received multiple fastq.gz files from Illumina sequencing for 100 samples, but all the fastq.gz files for each sample are in a separate folder named after the sample ID. Moreover, I have multiple (8-16) R1.fastq.gz and R2.fastq.gz files per sample. So I used the following code to concatenate all the R1.fastq.gz and R2.fastq.gz files into a single R1.fastq.gz and R2.fastq.gz:
cat V350043117_L04_some_digits-525_1.fq.gz V350043117_L04_some_digits-525_1.fq.gz V350043117_L04_some_digits-525_1.fq.gz > sample_R1.fq.gz
So in the sequencing files, the structure is like the above code: for each sample, the string starting with V has a different number, then L has a different number, and then there is another string of digits before the _1 and _2. These numbers change from sample to sample.
My question is: how can I create a loop that goes over all the folders at once, taking the different numbering of the sequence files into account, and concatenates the multiple fq.gz files into a single R1 and a single R2 file per sample?
Surely, I cannot just concatenate one by one by going into each sample folder.
Please give some helpful tips. Thank you.
The folder structure is the following:
/data/Sample_1/....._525_1_fq.gz /....._525_2_fq.gz /....._526_1_fq.gz /....._526_2_fq.gz
/data/Sample_2/....._580_1_fq.gz /....._580_2_fq.gz /....._589_1_fq.gz /....._589_2_fq.gz
/data/Sample_3/....._690_1_fq.gz /....._690_2_fq.gz /....._645_1_fq.gz /....._645_2_fq.gz
Based on the provided file structure, would you please try:
#!/bin/bash
for d in Raw2/C*/; do
(
    cd "$d"
    id=${d%/}; id=${id##*/}    # extract ID from the directory name
    cat V*_1.fq.gz > "${id}_R1.fq.gz"
    cat V*_2.fq.gz > "${id}_R2.fq.gz"
)
done
The syntax for d in Raw2/C*/ loops over the subdirectories starting with C.
The parentheses make the inner commands run in a subshell, so we don't have to worry about returning from cd "$d" (at the cost of a little extra execution time).
The variable id is assigned the ID extracted from the directory name.
cat V*_1.fq.gz, for example, will be expanded to V350028825_L04_581_1.fq.gz V350028825_L04_582_1.fq.gz V350028825_L04_583_1.fq.gz ... according to the files in the directory, and these are concatenated into ${id}_R1.fq.gz. Same for ${id}_R2.fq.gz.
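If you later need per-sample logic that is awkward in shell, the same loop translates to Python. This is a rough sketch that keeps the Raw2/C* pattern from above; it works because gzip streams can be concatenated byte-for-byte, which is also why plain cat works in the first place:
import glob
import os
import shutil

for d in sorted(glob.glob("Raw2/C*/")):
    sample = os.path.basename(d.rstrip("/"))   # extract ID from the directory name
    for read in ("1", "2"):
        chunks = sorted(glob.glob(os.path.join(d, f"V*_{read}.fq.gz")))
        with open(os.path.join(d, f"{sample}_R{read}.fq.gz"), "wb") as out:
            for chunk in chunks:
                with open(chunk, "rb") as src:
                    shutil.copyfileobj(src, out)   # byte-level concatenation, like cat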

Temp file not being deleted

I'm trying to create a temporary file in my pipeline, then use that file in another rule.
For example, I have two rules in a .smk file:
#Unzip adapter trimmed fastq file
rule unzip_fastq:
    input:
        '{sample}.adapterTrim.round2.fastq.gz'
    output:
        temp('{sample}.adapterTrim.round2.fastq')
    conda:
        '../envs/rep_element.yaml'
    shell:
        'gunzip -c {input[0]} > {output[0]}'
#Run bowtie2 to align to rep elements and parse output
rule parse_bowtie2_output_realtime:
    input:
        '{sample}.adapterTrim.round2.fastq'
    output:
        'rep_element_pipeline/{sample}.fastq.gz.mapped_vs_' + config["ref"]["bt2_index"] + '.sam'
    params:
        bt2=config["ref"]["bt2_index_path"], eid=config["ref"]["enst2id"]
    conda:
        '../envs/rep_element.yaml'
    shell:
        'perl ../scripts/parse_bowtie2_output_realtime_includemultifamily.pl '
        '{input[0]} {params.bt2} {output[0]} {params.eid}'
{sample}.adapterTrim.round2.fastq is used once and should ultimately be deleted upon completion. However, I'm finding that this file is uploaded to Amazon S3, even with the addition of temp(). The file is removed locally but still persists on S3.
Am I doing this correctly? '{sample}.adapterTrim.round2.fastq' is not currently listed in the rule all of the Snakefile.
We ultimately need to prevent this file from being uploaded to S3, so if there is a way to specify in the rule that this file should not be uploaded, that would be useful.
It seems that the snippet in the question is not consistent with actual use, since for S3 files one would need to wrap the file names in remote().
However, as a general solution, documentation contains the following:
The remote() wrapper is mutually-exclusive with the temp() and protected() wrappers.
Hence, if you intend to use a temp file, make sure it's not wrapped in remote(), or explicitly wrap the file in local().
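As a sketch of what that looks like for the rule in the question (assuming the workflow runs with a default remote provider for S3; whether local() composes with temp() this way should be verified against your Snakemake version):
rule unzip_fastq:
    input:
        '{sample}.adapterTrim.round2.fastq.gz'
    output:
        # local() keeps the file out of the default remote provider (S3),
        # and temp() then deletes it once all consumers have finished.
        temp(local('{sample}.adapterTrim.round2.fastq'))
    conda:
        '../envs/rep_element.yaml'
    shell:
        'gunzip -c {input[0]} > {output[0]}'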

When compressing files (zip, tar, etc.) over SSH, what determines the 'sort order' in which files are compressed?

Consider the following command run on a folder with 2TB of recursive folders and files in it.
tar -cvzf _backup.tar.gz /home/wwwsite/public_html
Consider that the folder being compressed is full of sub-folders (with hundreds of sub folders and files in them) and a naming convention that is random, but sequential, short example:
/17688EE/
/18503HH/
/19600SL/
/20659CS/
Consider that there are 10,000+ folders between each block (17000 block, then 18000 block, etc.). Naming convention: number 00000 + letters A-Z (i.e. 17000AZ-17000ZA), so the folders can easily be sorted by name.
Consider that the tar command is being run in a screen with verbose output in order to check on the "progress" of that command.
screen -S compress
In theory, I had assumed I could simply look at the output of that screen, but I notice that tar does not seem to be compressing the folders in the order they were created, nor sorted by folder name.
Therefore my question is twofold:
Other than looking at the verbose output of the tarball and guessing:
Is there any way to find out how long the compression process will take to complete? (Such as adding a flag onto the tar command to show an estimated time to completion, similar to the % complete that scp shows.)
In what order does the tar command decide to compress the folders? (And is there a way to tell the command to "sort by" date/name during compression?)
To elaborate: after 20 minutes of waiting for the 17001AA-to-AZ block to compress, I figured the 17001BA-to-BZ block would be next, but this is not the case; the verbose output shows tar grabbing folders seemingly at random, sorted neither by name nor by date.
Simply put: what determines the sort order during compression?
If you give tar a list of directory names, the order of the entries in the tar file will match the order that readdir returns filenames from the filesystem. The fact that you are compressing the tar file has no bearing on the order.
Here is a quick example to illustrate what happens on a Linux ext4 filesystem. Other filesystems may behave differently.
First create a new directory with three files, a1, a2 and a3
$ mkdir fred
$ cd fred
$ touch a1 a2 a3
Now let's see the order in which readdir returns the files. The -U option makes ls return the filenames unsorted, in the order they are stored in the directory.
$ ls -U
a3 a1 a2
As you can see, on my Linux setup the files are returned in an apparently random order.
Now stick the files in a tar file. Note I'm giving tar a directory name for the input file ("." in this instance) to make sure it has to call readdir behind the scenes.
$ tar cf xxx.tar .
And finally, let's see the order in which tar has stored the files.
$ tar tf xxx.tar
./
./a3
./a1
./a2
The order of the files a1, a2 and a3 matches the order that readdir returned the filenames from the filesystem. The . filename is present because it was explicitly included on the command line passed to tar.
If you want to force an order you will have to give tar a sorted list of filenames. The example below shows how to get tar to read the list of filenames from stdin, using the -T - command line option.
$ ls a* | tar cvf yyy.tar -T -
a1
a2
a3
In this toy example the list of filenames will be automatically sorted because the shell sorts the filenames that match the wildcard a*.
And just to confirm, this is what is in the tar file.
$ tar tf yyy.tar
a1
a2
a3
In your use-case a combination of the find and sort commands piped into tar should allow you to create a sorted tar file with as many entries as you like.
Something like this as a starting point.
find | sort | tar -cvzf _backup.tar.gz -T -
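If you ever want the same determinism from a script instead of a shell pipeline, the idea carries over to, say, Python's tarfile module, which stores members in exactly the order they are added (a sketch reusing the path from the question):
import os
import tarfile

with tarfile.open("_backup.tar.gz", "w:gz") as tar:
    for root, dirs, files in os.walk("/home/wwwsite/public_html"):
        dirs.sort()                    # recurse into subdirectories in name order
        for name in sorted(files):     # add files in name order
            tar.add(os.path.join(root, name))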

How to find duplicate directories

Let's create a test directory tree:
#!/bin/bash
top="./testdir"
[[ -e "$top" ]] && { echo "$top already exists!" >&2; exit 1; }
mkfile() { printf "%s\n" $(basename "$1") > "$1"; }
mkdir -p "$top"/d1/d1{1,2}
mkdir -p "$top"/d2/d1some/d12copy
mkfile "$top/d1/d12/a"
mkfile "$top/d1/d12/b"
mkfile "$top/d2/d1some/d12copy/a"
mkfile "$top/d2/d1some/d12copy/b"
mkfile "$top/d2/x"
mkfile "$top/z"
The structure, as printed by find testdir \( -type d -printf "%p/\n" , -type f -print \), is:
testdir/
testdir/d1/
testdir/d1/d11/
testdir/d1/d12/
testdir/d1/d12/a
testdir/d1/d12/b
testdir/d2/
testdir/d2/d1some/
testdir/d2/d1some/d12copy/
testdir/d2/d1some/d12copy/a
testdir/d2/d1some/d12copy/b
testdir/d2/x
testdir/z
I need to find the duplicate directories, but I need to consider only files (e.g. I should ignore (sub)directories without files). So, from the above test tree the wanted result is:
duplicate directories:
testdir/d1
testdir/d2/d1some
because both (sub)trees contain only the same two identical files, a and b (plus several directories without files).
Of course, I could md5deep -Zr ., or walk the whole tree with a perl script (using File::Find+Digest::MD5, or Path::Tiny, or the like) and calculate the files' md5 digests, but this doesn't help with finding the duplicate directories... :(
Any idea how to do this? Honestly, I don't have any idea.
EDIT
I don't need working code. (I'm able to code it myself.)
I "just" need some ideas on "how to approach" the solution of the problem. :)
Edit2
The rationale behind this - why I need it: I have approx 2.5 TB of data copied from many external HDDs as the result of a wrong backup strategy. E.g. over the years, whole $HOME dirs were copied into (many different) external HDDs. Many sub-directories have the same content, but they're at different paths. So now I'm trying to eliminate the same-content directories.
And I need to do this by directory, because there are directories which have some duplicate files, but not all. Let's say:
/some/path/project1/a
/some/path/project1/b
and
/some/path/project2/a
/some/path/project2/x
e.g. a is a duplicate file (not only by name, but by content too) - but it is needed for both projects. So I want to keep a in both directories - even though they're duplicate files. Therefore I'm looking for a "logic" for how to find duplicate directories.
Some key points:
If I understand right (from your comment, where you said: "(Also, when me saying identical files I mean identical by their content, not by their name)"), you want to find duplicate directories, e.g. directories whose content is exactly the same as some other directory's, regardless of the file names.
For this you must calculate some checksum or digest for the files. Identical digest = identical file (with great probability). :) As you already said, md5deep -Zr -of /top/dir is a good starting point.
I added the -of because for such a job you don't want to hash the targets of symlinks or other special files like fifos - just plain files.
Calculating the md5 for each file in a 2.5TB tree will surely take a few hours, unless you have a very fast machine. md5deep runs a thread per CPU core, so while it runs you can write the scripts.
Also, consider running md5deep as sudo, because it could be frustrating if, after a long run, you get error messages about unreadable files only because you forgot to change the file ownerships... (Just a note.) :) :)
For the "how to":
To compare "directories" you need to calculate some "directory digest", for easy comparison and for finding duplicates.
The most important thing is to realize the following key points:
you can exclude directories that contain files with unique digests. If a file is unique, e.g. has no duplicates, it is pointless to check its directory: a unique file in a directory means the directory is unique too. So the script should ignore every directory containing files with unique MD5 digests (based on md5deep's output).
you don't need to calculate the "directory digest" from the files themselves (as you try to do in your follow-up question). It is enough to calculate the "directory digest" from the already-calculated file md5s; you just must ensure that you sort them first!
For example, if your directory /path/to/some contains only two files a and b, and
if file "a" has md5 : 0cc175b9c0f1b6a831c399e269772661
and file "b" has md5: 92eb5ffee6ae2fec3ad71c777531578f
you can calculate the "directory digest" from the above file digests, e.g. using Digest::MD5 you could do:
perl -MDigest::MD5=md5_hex -E 'say md5_hex(sort qw( 92eb5ffee6ae2fec3ad71c777531578f 0cc175b9c0f1b6a831c399e269772661))'
and you will get 3bc22fb7aaebe9c8c5d7de312b876bb8 as your "directory digest". The sort is crucial(!) here, because the same command, but without the sort:
perl -MDigest::MD5=md5_hex -E 'say md5_hex(qw( 92eb5ffee6ae2fec3ad71c777531578f 0cc175b9c0f1b6a831c399e269772661))'
produces: 3a13f2408f269db87ef0110a90e168ae.
Note that even though the above digests aren't the digests of your files, they will be unique for every directory with different files, and identical for directories with identical files (because identical files have identical md5 file digests). The sorting ensures that you always calculate the digest in the same order, e.g. if some other directory contains two files:
file "aaa" has md5 : 92eb5ffee6ae2fec3ad71c777531578f
file "bbb" has md5 : 0cc175b9c0f1b6a831c399e269772661
using the above sort-and-md5 you will again get 3bc22fb7aaebe9c8c5d7de312b876bb8 - i.e. a directory containing the same files as above...
So, in this way you can calculate a "directory digest" for every directory you have, and be sure that if you see another directory with the digest 3bc22fb7aaebe9c8c5d7de312b876bb8, it means: this directory contains exactly the above two files a and b (even if their names are different).
This method is fast, because you calculate the "directory digests" only from small 32-byte strings, so you avoid repeated file-digest calculations.
The final part is easy now. Your final data should be in form:
3a13f2408f269db87ef0110a90e168ae /some/directory
16ea2389b5e62bc66b873e27072b0d20 /another/directory
3a13f2408f269db87ef0110a90e168ae /path/to/other/directory
from this it is easy to see that /some/directory and /path/to/other/directory are identical, because they have identical "directory digests".
Hm... all of the above is only a few lines of perl. It probably would have been faster to write the perl script directly here instead of this long textual answer - but you said you don't want code... :) :)
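For concreteness anyway, here is a minimal Python sketch of the same directory-digest idea (with a hypothetical input layout; it mirrors the perl one-liners above by hashing the sorted, concatenated file digests):
import hashlib
from collections import defaultdict

def dir_digest(file_md5s):
    # digest of a directory = md5 over its sorted file digests
    return hashlib.md5("".join(sorted(file_md5s)).encode()).hexdigest()

# md5deep-style input: {directory: [md5 of each plain file in it]}
dirs = {
    "/path/to/some": ["0cc175b9c0f1b6a831c399e269772661",
                      "92eb5ffee6ae2fec3ad71c777531578f"],
    "/path/to/other/directory": ["92eb5ffee6ae2fec3ad71c777531578f",
                                 "0cc175b9c0f1b6a831c399e269772661"],
}

groups = defaultdict(list)
for path, md5s in dirs.items():
    groups[dir_digest(md5s)].append(path)

for digest, paths in groups.items():
    if len(paths) > 1:
        print("identical directories:", *paths)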
A traversal can identify directories which are duplicates in the sense you describe. I take it that this means: if all files in a directory are equal to all files of another, then their paths are duplicates.
Find all files in each directory and form a string from their names. You can concatenate the names with a comma, say (or some other sequence that is certainly not in any name). This string is what gets compared. Prepend the path to it, so as to identify directories.
Comparison can be done, for instance, by populating a hash with the filename strings as keys and the paths as their values. Once you find that a key already exists, you can check the contents of the files and add the path to the list of duplicates.
The strings with path don't have to be actually formed, as you can build the hash and dupes list during the traversal. Having the full list first allows for other kinds of accounting, if desired.
This is altogether very little code to write.
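For instance, a rough Python sketch of that traversal (note it keys on file names only, so it inherits the assumption, discussed below, that equal names imply equal content until verified):
import os
from collections import defaultdict

seen = defaultdict(list)   # "a,b" -> directories containing exactly those files
for root, dirs, files in os.walk("testdir"):
    if files:              # ignore directories without files
        seen[",".join(sorted(files))].append(root)

for filelist, paths in seen.items():
    if len(paths) > 1:
        print("candidate duplicates:", *paths)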
An example. Let's say that you have
dir1/subdir1/{a,b} # duplicates (files 'a' and 'b' are considered equal)
dir2/subdir2/{a,b}
and
proj1/subproj1/{a,b,X} # NOT duplicates, since there are different files
proj2/subproj2/{a,b,Y}
The above prescription would give you strings
'dir1/subdir1/a,b',
'dir2/subdir2/a,b',
'proj1/subproj1/a,b,X',
'proj2/subproj2/a,b,Y';
where the (sub)string 'a,b' identifies dir1/subdir1 and dir2/subdir2 as duplicates.
I don't see how you can avoid a traversal to build a system that accounts for all files.
The procedure above is the first step, not handling directories with files and subdirectories.
Consider
dirA/                dirB/
├── a                ├── a
├── b                ├── X
└── sdA/             └── sdB/
    ├── c                ├── c
    └── d                └── d
Here the paths dirA/sdA/ and dirB/sdB/ are duplicates by the problem description but the whole dirA/ and dirB/ are distinct. This isn't shown in the question but I'd expect it to be of interest.
The procedure from the first part can be modified for this. Iterate through directories, forming a path component at every step. Get all files in each, and all subdirectories (if none we are done). Append the comma-separated file list to the path component (/sdA/). So the representation of the above is
'dirA/sdA,a,b/c,d', 'dirB/sdB,a,X/c,d'
For each file-list substring (c,d) found to already exist, we can check its path against the existing one, component by component. Now a hash with keys like c,d won't do, since this example has the same file-list for distinct hierarchies; a modified (or different) data structure is needed.
Finally, there may be more subdirectories parallel to sdA (say sdA2). We care only for its own path, except for the parallel files (a,b in that component of the path dirA/sdA2,a,b/). So keep all bottom-level file-lists (c,d) with their paths and, if the file-lists are equal and the paths are of the same length, check whether their paths have equal a,b file-lists in each path component.
I don't know whether this is a workable solution for you, but I'd expect "near-duplicates" to be rare - the backup is either a duplicate or not. So there may not be much need to handle further edge cases in complex sprawling hierarchies. This procedure should at least be a useful pre-selection mechanism that would greatly reduce the need for further work.
This assumes that equal file names very likely indicate equal files. Part of that is my expectation that if a file was even just renamed it still cannot be considered a duplicate. If this is not so, this approach won't work, and one would need something along the lines of the answer by jm666.
I made a tool which searches for duplicate folders.
https://github.com/un1t/dirdups
dirdups testdir -i 1
The -i 1 option treats folders as duplicates if they have at least 1 file in common. Without this option the default value is 10.
In your case it will find the following directories:
testdir/d1/d12/
testdir/d2/d1some/d12copy/

Running Word Count or Pig Script on a Directory to produce result in separate files

I am new to Hadoop/Pig.
I have a directory which has several files, and I need to run a word count on them. I can use the Hadoop sample wordcount example and run it on the directory to get the output, but the output will be in a single file. What should I do if I want the output for each input file to be in a different file?
I can use Pig too, and give the directory as input to Pig. However, how can I read the file names inside the directory and then give them to LOAD?
What I mean is:
Say I have a directory Test which has 5 files: test1, test2, test3, test4, test5. Now I want the word count of each file separately, each in a separate output file. I know I can provide the individual names and do it, but that would take a lot of time.
Is it possible to read the filenames from the directory and provide them as input to Pig's LOAD?
If you're using Pig version 0.10.0 or later, you can take advantage of a combination of source tagging and MultiStorage to keep track of the files.
For example, if you had an input directory pigin with files and content as the following:
pigin
|-test1 => "hello"
|-test2 => "world"
|-test3 => "Apache"
|-test4 => "Hadoop"
|-test5 => "Pig"
The following script will read each file and write its contents to a different output directory.
%declare inputPath 'pigin'
%declare outputPath 'pigout'
-- Define MultiStorage to write output to different directories based on the
-- first element in the tuple
define MultiStorage org.apache.pig.piggybank.storage.MultiStorage('$outputPath','0');
-- Load the input files, prepending each tuple with the file name
A = load '$inputPath' using PigStorage(',', '-tagsource');
-- Write output to different directories
store A into '$outputPath' using MultiStorage();
The above script will create an output directory tree that looks like the following:
pigout
|-test1
| `-test1-0 => "test1 hello"
|-test2
| `-test2-0 => "test2 world"
|-test3
| `-test3-0 => "test3 Apache"
|-test4
| `-test4-0 => "test4 Hadoop"
|-test5
| `-test5-0 => "test5 Pig"
The -0 at the end of the filenames correspond to the reducers that produced the output. If you have more than one reducer, you may see more than one file per directory.
You could extend the PigStorage code to add the file name to the tuple; see the Code Sample and look for the question "Q: I load data from a directory which contains different file. How do I find out where the data comes from?". For the output you could do a similar extension of PigStorage to write into different output files.
