mapping reads using snakemake - yaml

I am trying to run hisat2 mapping using Snakemake.
Basically, I'm using a config.yaml file like this:
reads:
    set1: /path/to/set1/samplelist.tab
hisat2:
    database: genome
    genome: genome.fa
    nodes: 2
    memory: 8G
    arguments: --dta
executables:
    hisat2: /Tools/hisat2-2.1.0/hisat2
    samtools: /Tools/samtools-1.3/samtools
Then Snakefile:
configfile: "config.yaml"
workdir: "/path/to/working_dir/"

# Hisat2
rule hisat2:
    input:
        reads = lambda wildcards: config["reads"][wildcards.sample]
    output:
        bam = "{sample}/{sample}.bam"
    params:
        idx = config["hisat2"]["database"],
        executable = config["executables"]["hisat2"],
        nodes = config["hisat2"]["nodes"],
        memory = config["hisat2"]["memory"],
        executable2 = config["executables"]["samtools"]
    run:
        shell("{params.executable} --dta -p {params.nodes} -x {params.idx} {input.reads} | "
              "{params.executable2} view -Sbh -o {output.bam} -")

# all
rule all:
    input:
        lambda wildcards: [sample + "/" + sample + ".bam"
                           for sample in config["reads"].keys()]
My samplelist.tab is like this:
id reads1 reads2
set1a set1a_R1.fastq.gz set1a_R2.fastq.gz
set1b set1b_R1.fastq.gz set1b_R2.fastq.gz
Any hints on how to make this work? I apologize for the messy script; I've just started using Snakemake.

You will have to do something like this:
import pandas as pd

reads = pd.read_csv(config["reads"]['set1'], sep='\t', index_col=0)

def get_fastq(wildcards):
    return list(reads.loc[wildcards.sample].values)

rule hisat2:
    input:
        get_fastq
    ...
First you need to load the sample list and store it (I did it as a pandas dataframe). Then you can look up which files belong to a given sample name.
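For completeness, here is a minimal sketch of how that lookup can be wired into the rest of the workflow. It is an illustration under a few assumptions: the reads are paired-end and passed to hisat2 via -1/-2, and rule all comes first so it is the default target (Snakemake runs the first rule by default):

import pandas as pd

configfile: "config.yaml"

# sample sheet indexed by the id column: one row per sample,
# with columns reads1 and reads2
reads = pd.read_csv(config["reads"]["set1"], sep="\t", index_col=0)

def get_fastq(wildcards):
    # returns [reads1, reads2] for the requested sample
    return list(reads.loc[wildcards.sample].values)

rule all:
    input:
        expand("{sample}/{sample}.bam", sample=reads.index)

rule hisat2:
    input:
        get_fastq
    output:
        bam = "{sample}/{sample}.bam"
    params:
        idx = config["hisat2"]["database"],
        hisat2 = config["executables"]["hisat2"],
        samtools = config["executables"]["samtools"],
        nodes = config["hisat2"]["nodes"]
    shell:
        "{params.hisat2} --dta -p {params.nodes} -x {params.idx} "
        "-1 {input[0]} -2 {input[1]} | "
        "{params.samtools} view -Sbh -o {output.bam} -"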
Edit:
Rewriting the code to look like this is much more readable (in my opinion).
rule hisat2:
    input:
        ["{sample}_R1.fastq.gz",
         "{sample}_R2.fastq.gz"]
    ...

Related

snakemake program not running all functions?

configfile: "config.yaml"

DATA = config['DATA_DIR']
BIN = config['BIN']
JASPAR = config['DATA_DIR']
RESULTS = config['RESULTS']
# JASPAR = "{0}/JASPAR2020".format(DATA)

JASPARS, ASSEMBLIES, BATCHES, TFS, BEDS = glob_wildcards(os.path.join(DATA, "{jaspar}", "{assembly}", "TFs", "{batch}", "{tf}", "{bed}.bed"))

rule all:
    input:
        expand(os.path.join(RESULTS, "{jaspar}", "{assembly}", "LOLA_dbs", "JASPAR2020_LOLA_{batch}.RDS"),
               jaspar = JASPARS, assembly = ASSEMBLIES, batch = BATCHES)

rule createdb:
    input:
        files = expand(os.path.join(RESULTS, "{jaspar}", "{assembly}", "data", "{batch}", "{tf}", "regions", "{bed}.bed"),
                       zip, jaspar = JASPARS, assembly = ASSEMBLIES, batch = BATCHES, tf = TFS, bed = BEDS)
    output:
        os.path.join(RESULTS, "{jaspar}", "{assembly}", "LOLA_dbs", "JASPAR2020_LOLA_{batch}.RDS")
    shell:
        """
        R --vanilla --slave --silent -f {BIN}/create_lola_db.R \
        --args {RESULTS}/{wildcards.jaspar}/{wildcards.assembly}/data/{wildcards.batch} \
        {output};
        """
Why is my Snakemake program not considering the "createdb" rule? It is only considering "all". Can someone please help me with this?
"snakemake only runs the first rule in a Snakefile, by default." - SOURCE
If that first rule's input isn't clearly linked to createdb's output, Snakemake has no reason to run createdb, because the only rule it wants to run would have to depend on createdb.
I suspect your problem is that the input to your rule createdb shouldn't have expand() applied to all the wildcards already handled by rule all; perhaps it should only expand {bed}? See here.
Other avenues to consider for debugging are inspecting what you unpacked glob_wildcards() into and what files actually contains.
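To make that concrete, here is a hedged sketch of what the createdb input could look like; treat it as an illustration, not the exact fix. Doubling the braces around {jaspar}, {assembly} and {batch} makes expand() leave them in place as wildcards, which Snakemake then fills in from the requested output, so only {tf} and {bed} are expanded (in practice you would also want to restrict the tf/bed pairs to those of the matched batch, e.g. with an input function):

rule createdb:
    input:
        # {{...}} escapes survive expand() as literal wildcards and are
        # resolved from the output this rule is asked to produce
        files = expand(os.path.join(RESULTS, "{{jaspar}}", "{{assembly}}", "data",
                                    "{{batch}}", "{tf}", "regions", "{bed}.bed"),
                       zip, tf = TFS, bed = BEDS)
    output:
        os.path.join(RESULTS, "{jaspar}", "{assembly}", "LOLA_dbs",
                     "JASPAR2020_LOLA_{batch}.RDS")
    ...

With the output wildcards preserved this way, each JASPAR2020_LOLA_{batch}.RDS requested by rule all maps to exactly one createdb job.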

snakemake 6.0.5: Input a list of folders and multiple files from each folder (to merge Lanes)

Good day. I have some directories (shown below), each containing .fastq files from different lanes.
CND1/ UD_LOO3_R1.fastq.gz UD_LOO4_R1.fastq.gz
CND2/ XD_L001_R1.fastq.gz XD_L004_R1.fastq.gz
Inside each directory, I want to create a merged fastq file named sample_R1.fastq.gz. For instance, CND1/UD_R1.fastq.gz, CND2/XD_R1.fastq.gz, and so on. To this end, I created the following Snakemake workflow.
from collections import defaultdict

dirs, samp, lane = glob_wildcards("{dir}/{sample}_L{lane}_R1.fastq.gz")
dirs, fls = glob_wildcards("{dir}/{files}_R1.fastq.gz")

D = defaultdict(list)
for x, y in zip(dirs, fls):
    D[x].append(y + '_R1.fastq.gz')

rule ML:
    message:
        "Merge all Lanes for Fragment R1"
    input:
        expand("{dir}/{files}", zip, dir=D.keys(), files=D.values())
    output:
        expand("{dir}/{s}_R1.fastq.gz", zip, dir=dirs, s=set(samp))
    shell:
        "echo {input} && echo {output} "
        #"zcat {input} >> {output}"
In the code above, the dict D contains directories as keys and lists of fastq files as values.
{ 'CND2': ['XD_L001_R1.fastq.gz', 'XD_L004_R1.fastq.gz'], 'CND1': ['UD_LOO3_R1.fastq.gz', 'UD_LOO4_R1.fastq.gz'] }
On a dry run, Snakemake complains about missing files as follows:
Missing input files for rule ML:
CND2/['XD_L001_R1.fastq.gz', 'XD_L004_R1.fastq.gz']
CND1/['UD_LOO3_R1.fastq.gz', 'UD_LOO4_R1.fastq.gz']
I want to understand the correct way to provide both a directory and a list of files together as input. Any help is greatly appreciated.
Thanks.
The error explains the issue clearly. As a result of the expand() call, your input is these two malformed paths:
CND2/['XD_L001_R1.fastq.gz', 'XD_L004_R1.fastq.gz']
CND1/['UD_LOO3_R1.fastq.gz', 'UD_LOO4_R1.fastq.gz']
If you need the input to be these four files:
CND1/UD_LOO3_R1.fastq.gz
CND1/UD_LOO4_R1.fastq.gz
CND2/XD_L001_R1.fastq.gz
CND2/XD_L004_R1.fastq.gz
you need to flatten your dictionary:
inputs = [(dir, file) for dir, files in D.items() for file in files]

rule ML:
    input:
        expand("{dir}/{files}", zip,
               dir=[row[0] for row in inputs],
               files=[row[1] for row in inputs])
or alternatively:
inputs = [(dir, file) for dir, files in D.items() for file in files]

rule ML:
    input:
        expand("{filename}", filename=[f"{dir}/{file}" for dir, file in inputs])
Overall, you are overcomplicating the problem; the same can be done without this ugly juggling of lists of tuples. glob_wildcards("{filename_base}_R1.fastq.gz") should give you a more convenient representation.
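For illustration, here is a minimal sketch of that simpler approach, assuming the per-lane files really follow the {dir}/{sample}_L{lane}_R1.fastq.gz pattern shown in the question:

dirs, samples, lanes = glob_wildcards("{dir}/{sample}_L{lane}_R1.fastq.gz")

# one merged target per (directory, sample) pair
targets = sorted(set(f"{d}/{s}_R1.fastq.gz" for d, s in zip(dirs, samples)))

wildcard_constraints:
    # no underscore in {sample}, so the merged output pattern
    # can never match a per-lane file
    sample = r"[^/_]+"

rule all:
    input:
        targets

rule ML:
    message:
        "Merge all lanes for fragment R1"
    input:
        lambda wc: sorted(
            f"{d}/{s}_L{l}_R1.fastq.gz"
            for d, s, l in zip(dirs, samples, lanes)
            if d == wc.dir and s == wc.sample)
    output:
        "{dir}/{sample}_R1.fastq.gz"
    shell:
        # concatenated gzip streams are themselves a valid gzip file,
        # so cat (not zcat) is enough here
        "cat {input} > {output}"

The wildcard constraint keeps the merged output pattern from matching the per-lane files themselves.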

Trying to get all paths in a YAML file

I've got an input YAML file (test.yml) as follows:
# sample set of lines
foo:
    x: 12
    y: hello world
    ip_range['initial']: 1.2.3.4
    ip_range[]: tba
    array['first']: Cluster1
array2[]: bar
The source contains square brackets for some keys (possibly empty).
I'm trying to get a line by line list of all the paths in the file, ideally like:
foo.x: 12
foo.y: hello world
foo.ip_range['initial']: 1.2.3.4
foo.ip_range[]: tba
foo.array['first']: Cluster1
array2[]: bar
I've used the yamlpath library and the yaml-paths CLI, but can't get the desired output. Trying this:
yaml-paths -m -s =foo -K test.yml
outputs:
foo.x
foo.y
foo.ip_range\[\'initial\'\]
foo.ip_range\[\]
foo.array\[\'first\'\]
Each path is on one line, but the output has all the escape characters ( \ ). Modifying the call to remove the -m option ("expand matching parent nodes") fixes that problem but the output is then not one path per line:
yaml-paths -s =foo -K test.yml
gives:
foo: {"x": 12, "y": "hello world", "ip_range['initial']": "1.2.3.4", "ip_range[]": "tba", "array['first']": "Cluster1"}
Any ideas how I can get one line per path entry, but without the escape chars? I was also wondering whether there is anything for path querying in the ruamel modules.
Your "paths" are nothing more than the joined string representation of the keys (and probably indices) of the
mappings (and potentially sequences) in your YAML document.
That can be trivially generated from data loaded from YAML with a recursive function:
import ruamel.yaml

yaml_str = """\
# sample set of lines
foo:
    x: 12
    y: hello world
    ip_range['initial']: 1.2.3.4
    ip_range[]: tba
    array['first']: Cluster1
array2[]: bar
"""

def pathify(d, p=None, paths=None, joinchar='.'):
    if p is None:
        paths = {}
        pathify(d, "", paths, joinchar=joinchar)
        return paths
    pn = p
    if p != "":
        pn += joinchar
    if isinstance(d, dict):
        for k in d:
            v = d[k]
            pathify(v, pn + k, paths, joinchar=joinchar)
    elif isinstance(d, list):
        for idx, e in enumerate(d):
            pathify(e, pn + str(idx), paths, joinchar=joinchar)
    else:
        paths[p] = d

yaml = ruamel.yaml.YAML(typ='safe')
paths = pathify(yaml.load(yaml_str))
for p, v in paths.items():
    print(f'{p} -> {v}')
which gives:
foo.x -> 12
foo.y -> hello world
foo.ip_range['initial'] -> 1.2.3.4
foo.ip_range[] -> tba
foo.array['first'] -> Cluster1
array2[] -> bar
While Anthon's answer certainly produces the output you were after, I think your question was specifically about how to get the yaml-paths command to produce the desired output. I'll address that original question.
As of version 3.5.0, the yamlpath project's yaml-paths command supports a --noescape option which removes the escape symbols from output. Using your input file and the new option, you may find this output more to your liking:
$ yaml-paths --nofile --expand --keynames --noescape --values --search='=~/.*/' test.yml
foo.x: 12
foo.y: hello world
foo.ip_range['initial']: 1.2.3.4
foo.ip_range[]: tba
foo.array['first']: Cluster1
array2[]: bar
Note:
Using the --values option includes the value with each YAML Path.
For interest, I changed the --search expression to match every node in the input file rather than only the "foo" data.
The default output (without setting --noescape) produces YAML Paths which can be used as direct input into other YAML Path parsers and processors; setting --noescape changes this to render human-friendly paths which may not work as downstream YAML Path input.
Disclaimer: I am the author of the yamlpath project. Should you ever run into issues or have questions about it, please visit the project's GitHub project site and engage me via Issues (bugs and feature requests) or Discussions (questions). Thank you!

Snakemake - parameter file treated as a wildcard

I have written a pipeline in Snakemake. It's an ATAC-seq pipeline (a bioinformatics pipeline to analyze genomics data from a specific experiment). Basically, up to the alignment-merging step I use the {sample_id} wildcard, and then switch to the {sample} wildcard (two or more sample_ids are merged into one sample).
[Working DAG here (for simplicity only one sample is shown; the orange and blue {sample_id}s are merged into one green {sample}).]
The all rule looks as follows:
configfile: "config.yaml"

SAMPLES_DICT = dict()
with open(config['SAMPLE_SHEET'], "r+") as fil:
    next(fil)
    for lin in fil.readlines():
        row = lin.strip("\n").split("\t")
        sample_id = row[0]
        sample_name = row[1]
        if sample_name in SAMPLES_DICT.keys():
            SAMPLES_DICT[sample_name].append(sample_id)
        else:
            SAMPLES_DICT[sample_name] = [sample_id]

SAMPLES = list(SAMPLES_DICT.keys())
SAMPLE_IDS = [sample_id for sample in SAMPLES_DICT.values() for sample_id in sample]

rule all:
    input:
        # FASTQC output for RAW reads
        expand(os.path.join(config['FASTQC'], '{sample_id}_R{read}_fastqc.zip'),
               sample_id = SAMPLE_IDS,
               read = ['1', '2']),
        # Trimming
        expand(os.path.join(config['TRIMMED'],
                            '{sample_id}_R{read}_val_{read}.fq.gz'),
               sample_id = SAMPLE_IDS,
               read = ['1', '2']),
        # Alignment
        expand(os.path.join(config['ALIGNMENT'], '{sample_id}_sorted.bam'),
               sample_id = SAMPLE_IDS),
        # Merging
        expand(os.path.join(config['ALIGNMENT'], '{sample}_sorted_merged.bam'),
               sample = SAMPLES),
        # Marking Duplicates
        expand(os.path.join(config['ALIGNMENT'], '{sample}_sorted_md.bam'),
               sample = SAMPLES),
        # Filtering
        expand(os.path.join(config['FILTERED'], '{sample}.bam'),
               sample = SAMPLES),
        expand(os.path.join(config['FILTERED'], '{sample}.bam.bai'),
               sample = SAMPLES),
        # multiqc report
        "multiqc_report.html"
    message:
        '\n#################### ATAC-seq pipeline #####################\n'
        'Running all necessary rules to produce complete output.\n'
        '############################################################'
I know it's too messy and I should only keep the necessary bits, but this is where my understanding of Snakemake fails: I don't know what I have to keep and what I should delete.
To my knowledge, this works exactly as I want.
However, I added a rule:
rule hmmratac:
    input:
        bam = os.path.join(config['FILTERED'], '{sample}.bam'),
        index = os.path.join(config['FILTERED'], '{sample}.bam.bai')
    output:
        model = os.path.join(config['HMMRATAC'], '{sample}.model'),
        gappedPeak = os.path.join(config['HMMRATAC'], '{sample}_peaks.gappedPeak'),
        summits = os.path.join(config['HMMRATAC'], '{sample}_summits.bed'),
        states = os.path.join(config['HMMRATAC'], '{sample}.bedgraph'),
        logs = os.path.join(config['HMMRATAC'], '{sample}.log'),
        sample_name = '{sample}'
    log:
        os.path.join(config['LOGS'], 'hmmratac', '{sample}.log')
    params:
        genomes = config['GENOMES'],
        blacklisted = config['BLACKLIST']
    resources:
        mem_mb = 32000
    message:
        '\n######################### Peak calling ########################\n'
        'Peak calling for {output.sample_name}\n.'
        '############################################################'
    shell:
        'HMMRATAC -Xms2g -Xmx{resources.mem_mb}m '
        '--bam {input.bam} --index {input.index} '
        '--genome {params.genome} --blacklist {params.blacklisted} '
        '--output {output.sample_name} --bedgraph true &> {log}'
And into the rule all, after filtering, before multiqc, I added:
# Peak calling
expand(os.path.join(config['HMMRATAC'], '{sample}.model'),
       sample = SAMPLES),
Relevant config.yaml fragments:
# Path to blacklisted regions
BLACKLIST: "/mnt/data/.../hg38.blacklist.bed"
# Path to chromosome sizes
GENOMES: "/mnt/data/.../hg38_sizes.genome"
# Path to filtered alignment
FILTERED: "alignment/filtered"
# Path to peaks
HMMRATAC: "peaks/hmmratac"
This is the error* I get (it repeats for every input and output of the rule). *Technically it's a warning, but it halts the execution of Snakemake, so I'm calling it an error.
File path alignment/filtered//mnt/data/.../hg38.blacklist.bed.bam contains double '/'. This is likely unintended. It can also lead to inconsistent results of the file-matching approach used by Snakemake.
WARNING:snakemake.logging:File path alignment/filtered//mnt/data/.../hg38.blacklist.bed.bam contains double '/'. This is likely unintended. It can also lead to inconsistent results of the file-matching approach used by Snakemake.
(The path isn't actually ... in reality; I just didn't feel safe providing the absolute path here.)
For a couple of days I have struggled with this error. I have looked through the documentation and listened to the introduction. I understand that the above description is far from perfect (it is huge because I don't even know how to pare it down to a minimal reproducible example...) but I am desperate and hope you can be patient with me.
Any suggestions as to how to google it, or where to look for the error, would be much appreciated.
Technically it's a warning but it halts execution of snakemake so I am calling it an error.
It would be useful to post the logs from Snakemake to see whether it terminated with an error and, if so, which one.
However, in addition to Eric C.'s suggestion to use wildcards.sample instead of {sample} as file name, I think that this is quite suspicious:
alignment/filtered//mnt/data/.../hg38.blacklist.bed.bam
/mnt/ is usually at the root of the file system and you are prepending to it a relative path (alignment/filtered). Are you sure it is correct?
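To make Eric C.'s suggestion concrete, here is a hedged sketch of the rule with sample_name moved out of output and into params, where it is a plain string built from the wildcard rather than a file path for Snakemake to match (the sketch also references params.genomes consistently, and some output entries are omitted for brevity):

rule hmmratac:
    input:
        bam = os.path.join(config['FILTERED'], '{sample}.bam'),
        index = os.path.join(config['FILTERED'], '{sample}.bam.bai')
    output:
        model = os.path.join(config['HMMRATAC'], '{sample}.model'),
        gappedPeak = os.path.join(config['HMMRATAC'], '{sample}_peaks.gappedPeak'),
        summits = os.path.join(config['HMMRATAC'], '{sample}_summits.bed')
    log:
        os.path.join(config['LOGS'], 'hmmratac', '{sample}.log')
    params:
        genomes = config['GENOMES'],
        blacklisted = config['BLACKLIST'],
        # a plain output prefix derived from the wildcard; not an output
        # file, so Snakemake never tries to match it as a path
        prefix = lambda wildcards: os.path.join(config['HMMRATAC'], wildcards.sample)
    resources:
        mem_mb = 32000
    shell:
        'HMMRATAC -Xms2g -Xmx{resources.mem_mb}m '
        '--bam {input.bam} --index {input.index} '
        '--genome {params.genomes} --blacklist {params.blacklisted} '
        '--output {params.prefix} --bedgraph true &> {log}'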

Snakemake, how to change output filename when using wildcards

I think I have a simple problem, but I don't know how to solve it.
My input folder contains files like this:
AAAAA_S1_R1_001.fastq
AAAAA_S1_R2_001.fastq
BBBBB_S2_R1_001.fastq
BBBBB_S2_R2_001.fastq
My snakemake code:
import glob
import os

samples = [os.path.basename(x) for x in sorted(glob.glob("input/*.fastq"))]
name = []
for x in samples:
    if "_R1_" in x:
        name.append(x.split("_R1_")[0])
NAME = name

rule all:
    input:
        expand("output/{sp}_mapped.bam", sp=NAME),

rule bwa:
    input:
        R1 = "input/{sample}_R1_001.fastq",
        R2 = "input/{sample}_R2_001.fastq"
    output:
        mapped = "output/{sample}_mapped.bam"
    params:
        ref = "refs/AF086833.fa"
    run:
        shell("bwa mem {params.ref} {input.R1} {input.R2} | samtools sort > {output.mapped}")
The output file names are:
AAAAA_S1_mapped.bam
BBBBB_S2_mapped.bam
I want the output file to be:
AAAAA_mapped.bam
BBBBB_mapped.bam
How can I either change the output name or rename the files before or after the bwa rule?
Try this:
import pathlib

indir = pathlib.Path("input")
paths = indir.glob("*_S?_R?_001.fastq")
samples = set([x.stem.split("_")[0] for x in paths])

rule all:
    input:
        expand("output/{sample}_mapped.bam", sample=samples)

def find_fastqs(wildcards):
    fastqs = [str(x) for x in indir.glob(f"{wildcards.sample}_*.fastq")]
    return sorted(fastqs)

rule bwa:
    input:
        fastqs = find_fastqs
    output:
        mapped = "output/{sample}_mapped.bam"
    params:
        ref = "refs/AF086833.fa"
    shell:
        "bwa mem {params.ref} {input.fastqs} | samtools sort > {output.mapped}"
It uses an input function to find the correct fastq files for rule bwa. There might be a more elegant solution, but I can't see it right now. I think this should work, though.
(Edited to reflect OP's edit.)
Unfortunately, I've also had this problem, with filenames following this logic: {batch}/{seq_run}_{index}_{flowcell}_{lane}_{read_orientation}.fastq.gz.
I think the core problem is that none of the individual wildcards is unique. Also, not all values of all wildcards can be combined; seq_run1 was run on lane1, not lane2. Therefore, expand() does not work.
After multiple attempts in Snakemake (see below), my solution was to standardize the input with mv / sed / rename (a sketch of such a renaming step follows the list below). Removing {batch}, {flowcell} and {lane} made it possible to use {sample}, a unique combination of {seq_run} and {index}.
What did not work (but might be worth trying for others in the same situation):
Adding the zip argument to expand()
Renaming output using the following syntax:
output: "_".join(re.split("[/_]", "{full_filename}")[1,2]+".fastq.gz"
