I have a long loop in an rmd notebook. For testing, I've put debug print statements every 1,000 iterations so I can see the loop progress.
I'd like to keep that debug output there while I'm doing my design/testing/debugging but omit that output when knitting my notebook. Is there an easy way to do that?
Use results="hide" as a chunk option, as in:
```{r results="hide"}
# r code
```
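For example, a chunk like this (a rough sketch of the loop from the question, with a made-up iteration count) shows the progress when you run it interactively, but the printed output is dropped from the knitted document:

```{r results="hide"}
for (i in 1:10000) {
  # ... long-running work ...
  if (i %% 1000 == 0) print(paste("iteration", i))
}
```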
I tried to duplicate the output of the ninja build system to a separate file, but I want to keep the original compressed look of the ninja output.
If I tee ninja (ninja all | tee -a someFile) I get a wall of text
instead of one line being updated in place.
If there is a better way to duplicate the output of ninja to a file without losing the compressed formatting of the output, please let me know!
UPD: I found out that ninja updates lines with the [K escape sequence (erasing the line), and after capturing or redirecting ninja's output it vanishes. If somebody knows how to let the system capture all types of escape sequences, it would solve my problem.
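For what it's worth, one common workaround (not from the question, just a suggestion) is to run the build under a pseudo-terminal, so ninja still believes it is writing to a terminal and keeps its single-line status output; the util-linux script command can do this, with ninja.log as a placeholder file name:

```bash
# -q quiet, -e propagate the exit code, -f flush after each write,
# -c run the given command under a pty; everything is logged to ninja.log
script -qefc "ninja all" ninja.log
```

The log will then contain the raw escape sequences, so viewing it with something like less -R may be needed.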
I have a Nextflow process that uses a bash script (check_bam.sh) to generate a text file. The only possible content of that text file is either a 0 or some other number. I would like to extract that value and save it to a Nextflow variable so I can use it in a conditional: if the content of the file is 0, the Nextflow script should skip some processes, and if it's any non-zero number, the execution should be carried out completely. I am not having problems with the use of Nextflow conditionals or with setting channels to empty; the problem is saving the value that is generated inside the script block into a Nextflow variable that is usable outside the process.
The process that generates the file (result_bam.txt) with the 0 or other number is as follows (I have simplified it to make it as clear as possible):
```
process CHECK_BAM {
    input:
    path bam from channel_bam

    output:
    path "result_bam.txt"
    path "result_bam.txt" into channel_check_bam

    script:
    """
    bash bin/check_bam.sh $bam > result_bam.txt
    """
}
```
What I am checking is the number of mapped reads in the BAM file, and I would like to save that number into a Nextflow variable because if the number is zero, the execution should skip most of the following processes, but if the number is different than zero, it means that there are mapped reads in the file and the execution should continue as intended.
I have thought that maybe using cat result_bam.txt > $FOO or FOO=`cat result_bam.txt` could be a solution, but I don't know how to properly save it so the variable is usable between processes.
Use an env channel to grab the value from FOO=`cat result_bam.txt` and turn it into a channel.
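A minimal sketch of what that could look like in DSL1-style syntax (assuming check_bam.sh writes the count to stdout; the variable name FOO is just an example):

```
process CHECK_BAM {
    input:
    path bam from channel_bam

    output:
    env FOO into channel_check_bam

    script:
    """
    FOO=\$(bash bin/check_bam.sh $bam)
    """
}
```

A downstream process could then take the value with something like `val foo from channel_check_bam` and test it in a when: block.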
A few things come to mind here; hopefully I understand your problem correctly. Is check_bam.sh only counting lines of the BAM file?
The first option would be to check whether the BAM file has content directly from your pipeline. This might be useful: countLines_documentation. You should be cautious here, as a huge BAM file can lead to a memory exception (countLines "loads" the file).
The second option, maybe better, is to pass the file result_bam.txt into the channel channel_check_bam, and then run the following process only if the content of the file (the number in result_bam.txt) is greater than 0. So, when you connect this channel to the other process, you should read the content like this:
```
input:
val bam_lines from channel_check_bam.map{ it.readLines() } // gives a list of lines, so the 1st line will be your number of mapped reads

when:
bam_lines[0].toInteger() > 0
```
This way it should run only when the number in result_bam.txt is > 0.
I was testing this with DSL2, so the code might need some small changes, but it works.
Cris Tuñí - Edit: 08/24/2021
Thanks to the help of DawidGaceck, I was able to edit my processes to run only when the number in the file was different from zero. My code ended up looking like this:
```
process CHECK_BAM {
    input:
    path bam from channel_bam

    output:
    path "result_bam.txt"
    path "result_bam.txt" into channel_check_bam_process1,
                               channel_check_bam_process2

    script:
    """
    bash bin/check_bam.sh $bam > result_bam.txt
    """
}
```
```
process PROCESS1 {
    input:
    val bam_lines from channel_check_bam_process1.map{ it.readLines() }

    when:
    bam_lines[0].toInteger() > 0

    script:
    """
    foo bar baz
    """
}
```
Hope this helps anyone with the same question or a similar issue!
Example in Julia: let's say I have a file "test.jl" with the following code:
```julia
for i in 1:3
    sleep(2)
    print("i = ", i, "\n")
end
```
Then if I run
nohup julia test.jl &
I won't get each print result every 2 seconds. Instead, I will get all three print results at the end, which is useless for monitoring the progress of a loop that takes forever to complete.
Any suggestions?
It's likely that output buffering is occurring. Try manually flushing standard output:
```julia
for i in 1:3
    sleep(2)
    print("i = ", i, "\n")
    flush(stdout)
end
```
nohup redirects standard output to $HOME/nohup.out; the results only show up all at once when the process is completed.
https://linux.die.net/man/1/nohup
You can still redirect it live, I think; something like tee might work, or >&1. Not sure, I would have to check.
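For example, combining the flush(stdout) call above with a redirect and tail lets you watch the output as it is produced; a small sketch, with test.log as a placeholder file name:

```bash
# run the script in the background, logging stdout and stderr to test.log
nohup julia test.jl > test.log 2>&1 &
# follow the log as new lines arrive
tail -f test.log
```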
Let's say I'm in a buffer like this, with the cursor on line 4. I want to run lines 1 to 2 and have the output appear in the same buffer on line 4 (where the cursor is):
echo "Testing"
echo "more testing"
# and here I want the output from running lines 1 to 2
...I know I can do 1,2w !sh to run lines 1 and 2 and have the output shown in whatever that temporary buffer is. But how do I get that output into my actual buffer for later editing?
(And I'd like the same thing to work with visually selected text, not just with line ranges given by numbers.)
You were using :w !... (:help :w_c), but you probably want :! (:help :!):
gg - go to top
Vj - select the two lines
y - yank into a buffer
4gg - go to 4th line
V - select it
p - paste over it
gv - reselect the pasted range
:!sh<CR> - execute in shell and replace
or, trusting ex commands more,
:4d
:1,2y
:3pu
:4,5!sh
NB: !sh is in most cases equivalent to !, as ! will call your default shell.
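For the visual-selection part, one plugin-free sketch is to feed the lines to the shell with systemlist() and put the result below the cursor (I believe :put = accepts a list of lines, but double-check on your Vim version):

```vim
" put the output of running buffer lines 1-2 through sh below the cursor line
:put =systemlist('sh', getline(1, 2))

" visual-mode variant: make the selection, press Esc (the '< and '> marks
" remain), move the cursor to where the output should go, then:
:put =systemlist('sh', getline("'<", "'>"))
```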
Yay! Found it. In case anyone else needs this exact same hack on a virgin/foreign vim (plugin-less, or someone else's server/config):
:1,2r !sh %
(Yeah, the output goes after the commands, or technically the commands are replaced with their output, so not at the cursor position, but it's good enough for me to replicate my Sublime + SublimeCommand workflow in vim :) )
I've run command line programs that output a line, and then update that line a moment later. But with ruby I can only seem to output a line and then another line.
What I have being output now:
Downloading file:
11MB 294K/s
12MB 307K/s
14MB 294K/s
15MB 301K/s
16MB 300K/s
Done!
And instead, I want to see this:
Downloading file:
11MB 294K/s
Followed a moment later by this:
Downloading file:
16MB 300K/s
Done!
I want the line my Ruby script outputs (showing the downloaded file size and transfer speed) to be overwritten each time, instead of each update being listed as a whole new line.
I'm currently using puts to generate output, which clearly isn't designed for this case. Is there a different output method that can achieve this result?
Use \r to move the cursor to the beginning of the line. And you should not be using puts, as it adds \n; use print instead. Like this:
print "11MB 294K/s"
print "\r"
print "12MB 307K/s"
One thing to keep in mind though: \r doesn't delete anything, it just moves the cursor back, so you would need to pad the output with spaces to overwrite the previous output (in case it was longer).
By default, the buffer is flushed when \n is printed to standard output. Since we are no longer printing \n, you might need to call STDOUT.flush after print to make sure the text gets printed right away.
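Putting the pieces together, a minimal sketch might look like this (the sizes and speeds are made-up placeholders):

```ruby
$stdout.sync = true   # or call $stdout.flush after each print

puts "Downloading file:"
5.times do |i|
  # \r returns to the start of the line; ljust pads with spaces so a
  # shorter line fully overwrites a longer previous one
  print "\r#{11 + i}MB #{294 + i}K/s".ljust(20)
  sleep 1
end
print "\nDone!\n"
```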