I have the following problem:
I use MACS2 (2.1.0.20140616) with the following short command line:
macs2 callpeak -t file.bam -f bam -g 650000000 -n Test -B --nomodel -q 0.01
It seems to work as I want, but when I convert the .bam file into .bed via
bedtools bamtobed -i file.bam > file.bed
and use MACS2 on this, I get a lot more peaks. As far as I understand, the .bed file should contain the same information as the .bam file, so that's kinda odd.
Any suggestions as to what the problem is?
Thanks!
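For what it's worth, a quick way to check whether the two inputs really describe the same alignments (a diagnostic sketch, not a fix) is to compare counts. bedtools bamtobed writes one BED line per alignment record, so secondary or supplementary alignments, and both mates of a pair, each become their own interval:
samtools view -c -F 0x904 file.bam   # primary mapped alignments only (0x4 unmapped, 0x100 secondary, 0x800 supplementary excluded)
wc -l file.bed                       # number of intervals bamtobed emitted
If the BED count is higher, those extra intervals are a plausible source of the extra peaks.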
I'm trying to use qsub to submit multiple parallel jobs, but I'm running into trouble passing parameter expansions to qsub. I'm using the -V option, but it doesn't seem to recognize what ${variable} is. Here's some code I tried running:
qsub -cwd -V -pe shared 4 -l h_data=8G,h_rt=00:10:00,highp -N bt2align3 -b y "projPath="$SCRATCH/CUTnTag/data_kayaokur2020"; sample="K4m3_rep1"; cores=8;
bowtie2 --end-to-end --very-sensitive --no-mixed --no-discordant --phred33 -I 10 -X 700
-p ${cores}
-x ${projPath}/bowtie2_index/GRCh38_noalt_analysis/GRCh38_noalt_as
-1 ${projPath}/raw_fastq/${sample}_R1.fastq.gz
-2 ${projPath}/raw_fastq/${sample}_R2.fastq.gz
-S ${projPath}/alignment/sam/${sample}_bowtie2.sam &> ${projPath}/alignment/sam/bowtie2_summary/${sample}_bowtie2.txt"
I just get an error that says "Invalid null command."
Is qsub not able to recognize parameter expansions? Is there a different syntax I should be using? Thanks.
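For what it's worth, two things stand out: the inner double quotes around "$SCRATCH/..." close the outer quoted string early, and "Invalid null command" is a csh error, which suggests the job is not running under bash at all. A sketch of a common restructuring (align.sh is a hypothetical name): export the variables in the submitting shell so -V copies them into the job, force a bash job shell with -S /bin/bash, and submit a small script instead of a quoted one-liner:
export projPath="$SCRATCH/CUTnTag/data_kayaokur2020"
export sample="K4m3_rep1"
export cores=8
qsub -cwd -V -S /bin/bash -pe shared 4 -l h_data=8G,h_rt=00:10:00,highp -N bt2align3 align.sh
where align.sh is:
#!/bin/bash
# projPath, sample and cores come from the submitting environment via qsub -V.
bowtie2 --end-to-end --very-sensitive --no-mixed --no-discordant --phred33 -I 10 -X 700 \
    -p "${cores}" \
    -x "${projPath}/bowtie2_index/GRCh38_noalt_analysis/GRCh38_noalt_as" \
    -1 "${projPath}/raw_fastq/${sample}_R1.fastq.gz" \
    -2 "${projPath}/raw_fastq/${sample}_R2.fastq.gz" \
    -S "${projPath}/alignment/sam/${sample}_bowtie2.sam" \
    &> "${projPath}/alignment/sam/bowtie2_summary/${sample}_bowtie2.txt"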
I'm trying to download multiple files in parallel using xargs. Things work well if I just download each file without specifying a name: echo ${links[@]} | xargs -P 8 -n 1 wget. Is there a way to download with a filename, like wget -O [filename] [URL], but in parallel?
Below is my work. Thank you.
links=(
"https://apod.nasa.gov/apod/image/1901/sombrero_spitzer_3000.jpg"
"https://apod.nasa.gov/apod/image/1901/orionred_WISEantonucci_1824.jpg"
"https://apod.nasa.gov/apod/image/1901/20190102UltimaThule-pr.png"
"https://apod.nasa.gov/apod/image/1901/UT-blink_3d_a.gif"
"https://apod.nasa.gov/apod/image/1901/Jan3yutu2CNSA.jpg"
)
names=(
"file1.jpg"
"file2.jpg"
"file3.jpg"
"file4.jpg"
"file5.jpg"
)
echo ${links[@]} ${names[@]} | xargs -P 8 -n 1 wget
With GNU Parallel you can do:
parallel wget -O {2} {1} ::: "${links[@]}" :::+ "${names[@]}"
If a download fails, GNU Parallel can also retry the command with --retries 3.
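If you would rather stay with xargs, one sketch (assuming the two arrays line up index for index and that no name or URL contains whitespace) is to print name/URL pairs and let xargs take two arguments per call, so each invocation becomes wget -O name URL:
for i in "${!links[@]}"; do
    printf '%s %s\n' "${names[i]}" "${links[i]}"
done | xargs -P 8 -n 2 wget -O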
I want to output wttr.in to a file with curl. The problem is that the output isn't what I see when I just visit wttr.in in a browser.
What I did is:
curl wttr.in -o ~/wt.tex and curl wttr.in -o ~/wt
The output is full of raw control characters instead of the nicely formatted report. It should look the way https://wttr.in does in the browser.
I solved it myself:
less -r -f -L wt.tex
-r outputs raw control characters, so the ANSI color escapes are rendered
-f forces less to open the file without asking (it suppresses the warning shown for binary files)
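In other words, the file itself is fine; it contains ANSI escape sequences, and any viewer that passes raw control characters through will render the colored report, e.g.:
cat ~/wt.tex       # the terminal interprets the escape sequences directly
less -R ~/wt.tex   # like -r, but limited to ANSI color sequences, which keeps less's screen handling reliable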
I am using Waifu2x to upscale a series of images, but I am having a problem with the command I am running. I would try to troubleshoot it myself, yet I can't make use of the error output. It reads:
âGâëü[: âéâfâïâtâ#âCâïé¬èJé»é▄é╣é±é┼é╡é╜
I think it is in Japanese, as evidenced by Waifu2x's GitHub page. I also believe it is the same every time, yet I can't know for sure. I am using an English computer, and I am an English speaker, so I really need to know what it says in English, or at least get it into something I can run through Google Translate.
I have already tried the solution here, as evidenced by looking at regedit, where Name=00 and Data=Consolas.
Regarding my specific problem, the command I am typing into cmd is
waifu2x-caffe-cui -i "C:\Users\Christian\workspace\CodeLyokoUpscaleing\bin\480Frames" -e png -l png -m noise_scale -d 16 -h 1440 -n 1 -p cudnn -c 256 -b 1 --auto_start 1 --auto_exit 1 --no_overwrite 1 -y upconv_7_anime_style_art_rgb -o "C:\Users\Christian\workspace\CodeLyokoUpscaleing\bin\1440Frames"
I really think it should work, as I converted it from another batch file I created that contained variables instead of file paths:
waifu2x-caffe-cui -i "%~dp0480Frames" -e png -l png -m noise_scale -d 16 -h 1440 -n 1 -p cudnn -c 256 -o "%~dp01440Frames" --auto_start 1 --auto_exit 1 --no_overwrite 1 -y upconv_7_anime_style_art_rgb
But I still get the weird output.
How can I see what the error is?
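One diagnostic sketch (not a guaranteed fix): the garbling pattern is typical of Japanese Shift-JIS text being rendered under a Western console code page, so you could switch cmd to code page 932 before rerunning the command (a Japanese-capable font such as MS Gothic must be selected for the characters to display):
chcp 932
Or capture the message and reopen the file in an editor that can decode Shift-JIS:
waifu2x-caffe-cui -i "C:\Users\Christian\workspace\CodeLyokoUpscaleing\bin\480Frames" -e png -l png -m noise_scale -d 16 -h 1440 -n 1 -p cudnn -c 256 -b 1 --auto_start 1 --auto_exit 1 --no_overwrite 1 -y upconv_7_anime_style_art_rgb -o "C:\Users\Christian\workspace\CodeLyokoUpscaleing\bin\1440Frames" > waifu2x_log.txt 2>&1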
I am trying to extract URLs from a webpage using wget. I tried this
wget -r -l2 --reject=gif -O out.html www.google.com | sed -n 's/.*href="\([^"]*\).*/\1/p'
It displays FINISHED
Downloaded: 18,472 bytes in 1 files
but not the web links. If I try to do it separately
wget -r -l2 --reject=gif -O out.html www.google.com
sed -n 's/.*href="\([^"]*\).*/\1/p' < out.html
Output
http://www.google.com/intl/en/options/
/intl/en/policies/terms/
It is not displaying all the links:
http://www.google.com
http://maps.google.com
https://play.google.com
http://www.youtube.com
http://news.google.com
https://mail.google.com
https://drive.google.com
http://www.google.com
http://www.google.com
http://www.google.com
https://www.google.com
https://plus.google.com
Moreover, I want to get links from the 2nd level and deeper. Can anyone suggest a solution for this?
Thanks in advance.
The -O file option captures the output of wget and writes it to the specified file, so there is no output going through the pipe to sed.
You can say -O - to direct wget output to standard output.
If you don't want to use grep, you can try
sed -n "/href/ s/.*href=['\"]\([^'\"]*\)['\"].*/\1/gp"
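Putting it together, something like this should print the links in one pass (a sketch; wget's progress chatter goes to stderr, so only the page body flows down the pipe):
wget -r -l2 --reject=gif -O - www.google.com 2>/dev/null | sed -n "/href/ s/.*href=['\"]\([^'\"]*\)['\"].*/\1/gp"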