Conditional evaluation of knitr chunks with RStudio "Run" button

I am using conditional evaluation via the eval option in the chunk header. If I write eval=FALSE in the header, the chunk is not evaluated when I knit the document, nor when I use Run All (Ctrl+Alt+R) from the RStudio menu.
The problem arises when I try to give eval a variable, as in the example below:
```{r setup}
ev_cars = TRUE
ev_pressure = FALSE
```
## First chunk
```{r cars, eval=ev_cars}
summary(cars)
```
## Second chunk
```{r pressure, echo=FALSE, eval = ev_pressure}
plot(pressure)
```
In this example, when I knit the document, the first chunk is evaluated and the second is not (because ev_pressure is FALSE). However, when I run it using Run All (Ctrl+Alt+R) from the RStudio menu, both chunks are evaluated.
Is there a way to overcome this issue?
I am using RStudio v1.1.
All the best,
Gil

EDIT: I originally wrote that the chunk options are only used when you knit, and that the Run All command does not knit the document but executes what is inside the chunks without reading the chunk options. This is not entirely true: if eval is set to a literal FALSE or TRUE, it is taken into account.
Still, a way to control options like not executing code inside chunks when running Run All is to do it the old way, with an if inside the chunk:
```{r setup}
ev_cars = TRUE
ev_pressure = FALSE
```
## First chunk
```{r cars}
if (ev_cars) {
  summary(cars)
}
```
## Second chunk
```{r pressure, echo=FALSE}
if (ev_pressure) {
  plot(pressure)
}
```
The code is heavier this way. But if you are going to use Run All, why not knit the document directly?
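A slightly lighter variant of the same idea, as a sketch (run_if is a hypothetical helper, not a knitr feature): wrap the flag check in a small function and rely on R's lazy argument evaluation, so the code argument is only forced when the flag is TRUE.
```{r}
# hypothetical helper: evaluates `code` only when `flag` is TRUE,
# relying on lazy evaluation of function arguments
run_if <- function(flag, code) if (isTRUE(flag)) code
run_if(ev_cars, summary(cars))
run_if(ev_pressure, plot(pressure))
```
This behaves the same way under both knit and Run All; it just keeps the chunks shorter than writing out the if blocks.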

Related

In R markdown in RStudio, how can I prevent the source code from running off a pdf page when bash language is used?

I'm writing a technical book using Bookdown and RStudio. My code chunks mainly use bash. Everything works fine except when I export the book to PDF: the source code runs partially outside its box, and even off the page if the line is long enough. I have read a lot of solutions for when the r language is used, but none of these solutions works when the bash language is used.
Here is my code at the beginning of the .Rmd file:
```{r, global_options, include=FALSE}
knitr::opts_chunk$set(message=FALSE, eval=FALSE,
                      tidy.opts=list(width.cutoff=60), tidy=TRUE)
```
And then when I write the code chunk:
```{bash}
mongodump --uri="mongodb+srv://cluster0.rh6qzzz.mongodb.net/" --db sample_mflix --username my_username
```
The output was produced as shown below, with the end of the line running off the page (screenshot omitted).
I would like to avoid this, but I have not found the solution.
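One direction worth trying, as an unverified sketch: tidy=TRUE and width.cutoff only reformat R code, so they cannot help bash chunks. Pandoc wraps highlighted code (bash included) in a LaTeX Highlighting environment built on fancyvrb, so redefining that environment with breaklines enabled in a preamble file (included via includes: in_header in the YAML; the file name preamble.tex is an assumption) can make long lines wrap:
% preamble.tex -- assumes pandoc's default syntax highlighting is active
\usepackage{fvextra}
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{breaklines,commandchars=\\\{\}}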

Python chunk echoes in r-markdown, but not r-notebook

I like to take notes inside of r-notebooks and have recently been trying to incorporate python code chunks into some of my documents. I have no problems executing python chunks and displaying the output, but I get different behavior depending on whether I'm using an r-notebook or an r-markdown document. While r-markdown will echo the python code and display the output, r-notebook will only display the output. I have tried explicitly stating echo=T in the chunk, but it does not change the outcome. Any thoughts on how to get the appropriate behavior in r-notebooks?
EDIT: Will also add that this behavior in notebooks only appears to happen when the code chunk has a printed output. A chunk that does not print will echo code correctly.
Below is an example:
R-notebook example
---
title: "R Notebook"
output: html_notebook
---
Example R-notebook
```{r setup, include=FALSE}
library(knitr)
library(reticulate)
knitr::knit_engines$set(python = reticulate::eng_python)
```
```{python}
print("Hello world")
```
R-markdown example
---
title: "R Notebook"
output: html_document
---
Example R-markdown
```{r setup, include=FALSE}
library(knitr)
library(reticulate)
knitr::knit_engines$set(python = reticulate::eng_python)
```
```{python}
print("Hello world")
```

Tensorflow image loading gives OutOfRangeError, regardless of which session initializer is used

I copied a test script to load a directory of images into TensorFlow:
# Typical setup to include TensorFlow.
import tensorflow as tf
from sys import argv
# Make a queue of file names including all the JPEG images files in the relative
# image directory.
filename_queue = tf.train.string_input_producer(
    tf.train.match_filenames_once(argv[1] + "/*.jpg"))
# Read an entire image file which is required since they're JPEGs, if the images
# are too large they could be split in advance to smaller files or use the Fixed
# reader to split up the file.
image_reader = tf.WholeFileReader()
# Read a whole file from the queue, the first returned value in the tuple is the
# filename which we are ignoring.
_, image_file = image_reader.read(filename_queue)
# Decode the image as a JPEG file, this will turn it into a Tensor which we can
# then use in training.
image_orig = tf.image.decode_jpeg(image_file)
image = tf.image.resize_images(image_orig, [224, 224])
image.set_shape((224, 224, 3))
# Start a new session to show example output.
with tf.Session() as sess:
    # (session body omitted here; see the gist linked below for the full script)
However, when I ran the script, I received an odd error:
OutOfRangeError (see above for traceback): FIFOQueue '_1_input_producer' is closed and has insufficient elements (requested 1, current size 0)
And when I tried to look up a solution, I got several different answers:
tf.initialize_all_variables().run()
tf.local_variables_initializer().run()
sess.run(tf.local_variables_initializer())
sess.run(tf.global_variables_initializer())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
I have tried all of those options, and all of them have failed. The original script (https://gist.github.com/eerwitt/518b0c9564e500b4b50f) has barely 40 lines. What solution am I missing?
UPDATE
I'm now running this:
# Start a new session to show example output.
with tf.Session() as sess:
    # Required to get the filename matching to run.
    sess.run(tf.local_variables_initializer())
    sess.run(tf.global_variables_initializer())
    # Coordinate the loading of image files.
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    # Get an image tensor and print its value.
    image_tensor = sess.run([image])
    print(image_tensor)
    # Finish off the filename queue coordinator.
    coord.request_stop()
    coord.join(threads)
And the error still occurs.
You need to initialize both local and global variables: match_filenames_once returns a local variable, which is not initialized simply by using tf.global_variables_initializer().
So, for your problem, adding:
with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())
    sess.run(tf.global_variables_initializer())
    # actual code should go here, including creating the coordinator
    # and starting the queue runners that feed the filename queue
    coord.request_stop()
    coord.join(threads)
should solve the problem.
tf.initialize_all_variables() is the old way of initializing variables, and I think it used to initialize both global and local variables back when it was the recommended call. Nowadays it is deprecated and initializes only the global variables. So sources that use old-style code do not report any problem executing it, but in newer TensorFlow versions the same code breaks down.
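As a side note, a compact pattern (a sketch, assuming a TF1-style graph session) is to group both initializer ops into a single op with tf.group and run them in one call:
# combine both initializer ops so one run call covers global and local variables
init_op = tf.group(tf.global_variables_initializer(),
                   tf.local_variables_initializer())
with tf.Session() as sess:
    sess.run(init_op)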

Help with Ruby & PrinceXML

I'm trying to write a very simple markdown-like converter in Ruby, then pass the output to PrinceXML (which is awesome). Prince basically converts HTML to PDF.
Here's my code:
#!/usr/bin/ruby
# USAGE: command source-file.txt target-file.pdf
# read argument 1 as input
text = File.read(ARGV[0])
# wrap paragraphs in paragraph tags
text = text.gsub(/^(.+)/, '<p>\1</p>')
# create a new temp file for processing
htmlFile = File.new('/tmp/sample.html', "w+")
# place the transformed text in the new file
htmlFile.puts text
# run prince
system 'prince /tmp/sample.html #{ARGV[1]}'
But this dumps an empty file to /tmp/sample.html. When I exclude calling prince, the conversion happens just fine.
What am I doing wrong?
It's possible that the file output is being buffered, and not written to disk, because of how you are creating the output file. Try this instead:
# create a new temp file for processing
File.open('/tmp/sample.html', "w+") do |htmlFile|
  # place the transformed text in the new file
  htmlFile.puts text
end
# run prince
system 'prince /tmp/sample.html #{ARGV[1]}'
This is idiomatic Ruby: we pass a block to File.open, and the file will automatically be closed when the block exits. As a by-product of closing the file, any buffered output is flushed to disk, where the code in your system call can find it.
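For what it's worth, an even shorter variant (assuming a reasonably modern Ruby) is File.write, which opens, writes, and closes in one call, so there is no handle left unflushed:
# write the whole string and close the file in one step
File.write('/tmp/sample.html', text)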
From the fine manual:
prince doc.html -o out.pdf
Convert doc.html to out.pdf.
I think your system call should look like this:
system "prince /tmp/sample.html -o #{ARGV[1]}"
Also note the switch to double quotes so that #{} interpolation will work. Without the double quotes, the shell will see this command:
prince /tmp/sample.html #{ARGV[1]}
and then it will ignore everything after the # as a comment. I'm not sure why you end up with an empty /tmp/sample.html; I'd expect a PDF in /tmp/sample.pdf, based on my reading of the documentation.
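As an aside, one way to sidestep shell quoting entirely is the multi-argument form of system, which passes each argument straight to the command without invoking a shell:
# no shell involved, so no interpolation or comment-character surprises
system("prince", "/tmp/sample.html", "-o", ARGV[1])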

How can I achieve a Unix tail operation without using files, in Ruby?

I used Ruby to read an image file and save that into a string.
partial_image100 = File.read("image.tga")
partial_image99 = File.read("image.tga")
partial_image98 = File.read("image.tga")
...
I read those images at one end of a distributed system. On another system I want to do a tail operation; that system receives just the images.
I have around 100 partial images. I want to do a tail operation, like this:
tail -c +19 image100 >> image99
tail -c +19 image99 >> image98
tail -c +19 image98 >> image97
...
Basically it just removes the first 18 bytes of each partial image and appends what is left to the next image.
The problem is that this is slow: calling 100 Unix commands from Ruby is slow. I want to refactor this so that it happens in the Ruby world, just in memory, with no files.
How can I do this in Ruby?
Thanks
EDIT:
The images are stored in a hash like this:
{"27"=>"\u0000\u0000\u0002\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u000E\u0001\xD0\a\xD0\a\u0018 \xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF...
EDIT:
You have all the relevant code here: https://gist.github.com/989563
There are two files: the code, and a hash object encoded as JSON in a file. When you run the code, two image files are created in /tmp:
/tmp/image-tail-merger.tga – the output from the tail-merge algorithm
/tmp/image-/time/.tga – the output from the in-memory-tail algorithm
Currently the in-memory algorithm fails because the generated image is a Picasso. If you manage to make the in-memory algorithm generate the same image that the tail-merge algorithm does, then you have succeeded.
EDIT:
I got it right finally!!!
Here is the code
https://gist.github.com/989563
I might look at File::Tail, similar to the Perl module.
File.open(filename) do |log|
  log.extend(File::Tail)
  log.interval = 10
  log.backward(10)
  log.tail { |line| puts line }
end
You can also monkey-patch your own File to use File::Tail as well for cleaner usage.
You may want to take a look at String#unpack (and its inverse Array#pack).
In your case something like this should do what you want (the @ directive skips to an absolute offset, so @18 drops the 18-byte header):
trunked = image.unpack('@18c*').pack('c*')
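A lighter sketch of the same idea, if the result is only needed as a raw binary string: the a* directive pulls the remainder out as a single string and avoids the round trip through an array of integers.
# read everything from offset 18 to the end as one binary string
trunked = image.unpack('@18a*').first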
You might try something like this
image100 = "some image string"
image99 = "some other image string"
# append everything after the 18-byte header of image100 to image99
image99 += image100.slice(18, image100.size - 18)
EDIT: In your specific example you could do this to iterate through the entire hash of images:
image_hash.size.downto(2) do |i|
  # Here we use slice to select everything *except* the first 18 bytes.
  # Note: to select just the first 18 bytes we could do slice(0, 18);
  # to select just the last 18 bytes we could do slice(-18, 18).
  # We then append this result to the next image down the line
  # (with string keys like "27", convert with i.to_s as needed).
  image_hash[i - 1] += image_hash[i].slice(18, image_hash[i].size - 18)
end
If you want to remove the "tailed" bits permanently you can use slice! to do an in-place replace.
Maybe a bit cleaner:
# Strip the 18-byte headers in place
image_hash.each { |k, v| v.slice!(0, 18) }
# Append them together in numeric key order (keys are strings, so sort numerically)
image_hash.keys.sort_by(&:to_i).collect { |i| image_hash[i] }.join
EDIT: Working code example https://gist.github.com/989563
