isIncomplete(con) error when knitting PDF - RStudio

I am able to run the following example script from the SWSamp package fine in the console, but I get an "isIncomplete(con)" error when knitting to PDF. This package requires the use of at least 2 cores, and I'm wondering if this connection error is related to trying to run in parallel.
---
title: "Test"
output: pdf_document
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
```{r}
install.packages("http://www.statistica.it/gianluca/SWSamp/SWSamp_0.3.tar.gz", type="source", dependencies=TRUE, repos=NULL)
library(SWSamp)
```
```{r}
mu1=0.3
b.trt=-0.3875
sigma.e=1.55
J=5
K=20
sig.level=0.05
n.sims=10
rho=0.5
pow.cont <- sim.power(I=14, J=J, H=NULL, K=K, rho=rho, mu=mu1, sigma.e=sigma.e,
b.trt=b.trt, formula=NULL, n.sims=n.sims,
sig.level=sig.level, n.cores=2)
pow.cont$power
```
Quitting from lines 16-28 (test.Rmd)
Error in isIncomplete(con) : invalid connection
Calls: ... evaluate_call -> handle_output -> <Anonymous> -> isIncomplete
Quitting from lines 16-28 (test.Rmd)
Error in isOpen(con) : invalid connection
Calls: ... in_dir -> evaluate -> evaluate_call -> <Anonymous> -> isOpen
Error in close.connection(con) : invalid connection
Calls: <Anonymous> -> <Anonymous> -> close -> close.connection
Execution halted
> sessionInfo()
R version 3.3.2 (2016-10-31)
Platform: x86_64-apple-darwin13.4.0 (64-bit)
Running under: macOS Sierra 10.12.1
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] SWSamp_0.3 lme4_1.1-12 Matrix_1.2-7.1
loaded via a namespace (and not attached):
[1] Rcpp_0.12.8 knitr_1.15.1 magrittr_1.5 splines_3.3.2 MASS_7.3-45
[6] doParallel_1.0.10 lattice_0.20-34 foreach_1.4.3 minqa_1.2.4 stringr_1.1.0
[11] tools_3.3.2 parallel_3.3.2 grid_3.3.2 nlme_3.1-128 htmltools_0.3.5
[16] iterators_1.0.8 yaml_2.1.14 rprojroot_1.1 digest_0.6.10 nloptr_1.0.4
[21] codetools_0.2-15 evaluate_0.10 rmarkdown_1.3 stringi_1.1.2 compiler_3.3.2
[26] backports_1.0.4

The sim.power() function calls closeAllConnections(), which interferes with knitr, according to this SO answer. The solution in that answer was to wrap the expression in quotes inside the evaluate() function, but it did not work for me.
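Roughly, the attempted workaround looked like this (a sketch; the argument values are just the ones from the chunk above):
library(evaluate)
# pass the call as a string so evaluate()'s own output handlers manage the connections
res <- evaluate("pow.cont <- sim.power(I = 14, J = 5, K = 20, rho = 0.5, mu = 0.3, sigma.e = 1.55, b.trt = -0.3875, n.sims = 10, sig.level = 0.05, n.cores = 2)")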
As a temporary solution, I redefined the sim.power() function with the closeAllConnections() call commented out. I don't know whether this has adverse effects, but it seems to run for me. Happy to accept better answers.
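For reference, a minimal sketch of the kind of patch I mean, assuming closeAllConnections() appears as a top-level statement in the function body (it may need adjusting if the call is nested):
library(SWSamp)
sim.power.patched <- sim.power
# drop the closeAllConnections() statement from the copied body
body(sim.power.patched) <- as.call(
  Filter(function(e) !identical(e, quote(closeAllConnections())),
         as.list(body(sim.power.patched))))
# keep the package namespace so internal helpers still resolve
environment(sim.power.patched) <- asNamespace("SWSamp")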

Related

Error message "Cube::init(): requested size is too large; suggest to enable ARMA_64BIT_WORD" when using the sommer package for GWAS

Output from sessionInfo()
R version 4.0.5 (2021-03-31)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)
Matrix products: default
locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252 LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C LC_TIME=English_United States.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] sommer_4.1.4 crayon_1.4.1 lattice_0.20-41 MASS_7.3-53.1 Matrix_1.3-2 data.table_1.14.0
loaded via a namespace (and not attached):
[1] compiler_4.0.5 tools_4.0.5 rstudioapi_0.13 Rcpp_1.0.6 grid_4.0.5
I have been trying to carry out a gwas using the sommer package with the following code:
var_cov <- A.mat(m_matrix) ## additive relationship matrix
model <- GWAS(cbind(DW20, PLA07, PLA08, PLA09, PLA10, PLA11, PLA12, PLA13, PLA14, PLA15, PLA16, PLA17, PLA18, RGR07_09, RGR08_10, RGR09_11, RGR10_12, RGR11_13, RGR12_14, RGR13_15, RGR14_16, RGR15_17, RGR16_18, SA, SL, SW) ~ 1, random = ~ vs(accession, Gu = var_cov), data = pheno2, M = m_matrix, gTerm = "u:accession", n.PC = 5)
As described in the code, I have 26 traits and I would like to use the K+P model. My SNP matrix has 211,260 markers and 309 accessions.
When I run this code for one or two traits, it works fine. But when I try to run it with all 26 traits, I get the error message:
Error in GWAS(cbind(DW20, PLA07, PLA08, PLA09, PLA10, PLA11, PLA12, PLA13, :
Cube::init(): requested size is too large; suggest to enable ARMA_64BIT_WORD
I searched online and found that this error is related to the RcppArmadillo package.
Following the suggestions here (http://arma.sourceforge.net/docs.html#config_hpp_arma_64bit_word) and here (Large Matrices in RcppArmadillo via the ARMA_64BIT_WORD define), I tried to enable ARMA_64BIT_WORD by uncommenting the line #define ARMA_64BIT_WORD (below) in the file RcppArmadillo\include\armadillo_bits\config.hpp:
#if !defined(ARMA_64BIT_WORD)
//#define ARMA_64BIT_WORD
//// Uncomment the above line if you require matrices/vectors capable of holding more than 4 billion elements.
//// Note that ARMA_64BIT_WORD is automatically enabled when std::size_t has 64 bits and ARMA_32BIT_WORD is not defined.
#endif
and also by adding the following line to the file Makevars.win in RcppArmadillo\skeleton:
PKG_CPPFLAGS = -DARMA_64BIT_WORD=1
None of the suggestions worked and I keep getting the same error message. My questions are: is there another way to enable ARMA_64BIT_WORD that I am missing? Is it possible to run the GWAS function in the sommer package with as many as 26 traits, or is that number too high? Could the error message result from a mistake in my GWAS call?
Thank you very much in advance.
My first take, Ana, is that with cbind() you're trying to fit an unstructured multivariate model with 26 traits. With 309 accessions that means a model of 309 x 26 = 8,034 records, which is a bit too big for the direct-inversion algorithm that sommer uses, and the number of parameters to estimate is also large (the trait covariances alone come to (26*25)/2 = 325). Unless you have a good justification to run a multivariate GWAS, this is the issue with your analysis more than the C++ code behind it. I would suggest fitting a GWAS per trait in a for loop, for example:
var_cov <- A.mat(m_matrix) ## additive relationship matrix
traits <- c("DW20", "PLA07", "PLA08", "PLA09", "PLA10", "PLA11", "PLA12", "PLA13", "PLA14", "PLA15", "PLA16", "PLA17", "PLA18", "RGR07_09", "RGR08_10", "RGR09_11", "RGR10_12", "RGR11_13", "RGR12_14", "RGR13_15", "RGR14_16", "RGR15_17", "RGR16_18", "SA", "SL", "SW") # trait names as strings, not bare objects
for(itrait in traits){
  model <- GWAS(as.formula(paste(itrait, "~ 1")), random = ~ vs(accession, Gu = var_cov), data = pheno2, M = m_matrix, gTerm = "u:accession", n.PC = 5)
}
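If you want to keep every fit instead of overwriting model on each pass, a small variant of the same loop (just a sketch; the list is keyed by the trait strings):
models <- vector("list", length(traits))
names(models) <- traits
for(itrait in traits){
  models[[itrait]] <- GWAS(as.formula(paste(itrait, "~ 1")), random = ~ vs(accession, Gu = var_cov), data = pheno2, M = m_matrix, gTerm = "u:accession", n.PC = 5)
}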
If it turns out that even with a single trait the arma::cube allocation runs into memory issues, then we definitely need to look at why the Armadillo library cannot deal with those dimensions.
Cheers,
Eduardo

How to debug with PureScript?

Issue
The following is a minimal, contrived example:
read :: FilePath -> Aff String
read f = do
  log ("File: " <> f) -- (1)
  readTextFile UTF8 f -- (2)
I would like to do some debug logging at (1) before a potential error at (2) occurs. Executing the following code in the Spago REPL works for success cases so far:
$ spago repl
> launchAff_ $ read "test/data/tree/root.txt"
File: test/data/tree/root.txt
unit
Problem: if there is an error at (2) (the file is a directory here), (1) seems not to be executed at all:
$ spago repl
> launchAff_ $ read "test/data/tree"
~/purescript-book/exercises/chapter9/.psci_modules/node_modules/Effect.Aff/foreign.js:532
throw util.fromLeft(step);
^
[Error: EISDIR: illegal operation on a directory, read] {
errno: -21,
code: 'EISDIR',
syscall: 'read'
}
The original problem is more complex, involving several layers of recursion (see E-Book exercise 3), where I need logging to debug the above error.
Questions
How can I log reliably here, regardless of errors that occur afterwards?
(Optional) Is there a more sophisticated, well-established debugging alternative - purescript-debugger? A dedicated VS Code debug extension/functionality would be the cherry on the cake.
First of all, the symptoms you observe do not mean that the first line doesn't execute. It always executes; you're just not seeing its output due to how the console works in the PureScript REPL: the output gets swallowed. Sadly, that's not the only problem with the REPL.
You can verify that the first line is always executed by replacing log with throwError and observing that the error always gets thrown. Or, alternatively, you can make the first line modify a mutable cell instead of writing to the console, and then examine the cell's contents.
Finally, this only happens in REPL. If you put that launchAff_ call inside main and run the program, you will always get the console output.
Now to the actual question at hand: how to produce a debug trace.
Logging to console is fine if you can afford it, but there is a more elegant way: Debug.trace.
This function has a hidden effect - i.e. its type says it's pure, but it really produces an effect when called. This little lie lets you use trace in a pure setting and thus debug pure code. No need for Effect! That is fine as long as it's used for debugging only, but don't put it in production code.
The way it works is that it takes two parameters: the first one gets printed to the console, and the second one is a function to be called after printing; the result of the whole thing is whatever that function returns. For example:
calculateSomething :: Int -> Int -> Int
calculateSomething x y =
  trace ("x = " <> show x) \_ ->
    x + y

main :: Effect Unit
main =
  log $ show $ calculateSomething 37 5
> npx spago run
'x = 37'
42
The first parameter can be anything at all, not just a string. This lets you easily print a lot of stuff:
calculateSomething :: Int -> Int -> Int
calculateSomething x y =
  trace { x, y } \_ ->
    x + y
> npx spago run
{ x: 37, y: 5 }
42
Or, applying this to your code:
read :: FilePath -> Aff String
read f = trace ("File: " <> f) \_ -> do
  readTextFile UTF8 f
But here's a subtle detail: this tracing happens as soon as you call read, even if the resulting Aff is never actually executed. If you need tracing to happen on effectful execution, you'll need to make the trace call part of the action, and be careful not to make it the very first action in the sequence (the first action's expression is evaluated as soon as the do block is constructed, while later actions sit under lambdas and are only evaluated when the sequence actually runs):
read :: FilePath -> Aff String
read f = do
  pure unit
  trace ("File: " <> f) \_ -> pure unit
  readTextFile UTF8 f
It is, of course, a bit inconvenient to do this every time you need to trace in an effectful context, so there is a special function that does it for you - it's called traceM:
read :: FilePath -> Aff String
read f = do
  traceM ("File: " <> f)
  readTextFile UTF8 f
If you look at its source code, you'll see that it does exactly what I did in the example above.
The sad part is that trace won't help you in the REPL when an exception happens, because it still prints to the console, so its output still gets swallowed for the same reasons.
But even when it doesn't get swallowed, the output is a bit garbled, because trace actually prints in color (to help you pick it out among other output), and the PureScript REPL has a complicated relationship with color:
> calculateSomething 37 5
←[32m'x = 37'←[39m
42
In addition to Fyodor Soikin's great answer, I found a variant using the VS Code debug view.
1.) Make sure to build with sourcemaps:
spago build --purs-args "-g sourcemaps"
2.) Add debug configuration to VS Code launch.json:
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "pwa-node",
      "request": "launch",
      "name": "Launch Program",
      "skipFiles": ["<node_internals>/**"],
      "runtimeArgs": ["-e", "require('./output/Main/index.js').main()"],
      "smartStep": true // skips files without (valid) source map
    }
  ]
}
Replace "./output/Main/index.js" / .main() with the compiled .js file / function to be debugged.
3.) Set break points and step through the .purs file via sourcemap support.

Segmentation fault with TensorFlow.jl with Julia 0.7.0 on macOS

I'm trying to install TensorFlow.jl on macOS for Julia v0.7.0. I have the following packages installed:
(v0.7) pkg> status
Status `~/.julia/environments/v0.7/Project.toml`
[336ed68f] CSV v0.3.1
[a93c6f00] DataFrames v0.13.1
[7073ff75] IJulia v1.12.0
[438e738f] PyCall v1.18.4
[1d978283] TensorFlow v0.10.2
When running the following MWE
using TensorFlow
sess = Session()
feature_columns = placeholder(Float32)
target_columns = placeholder(Float32)
m=Variable(0.05)
b=Variable(0.0)
y=m.*feature_columns+b
loss=reduce_sum((target_columns - y).^2)
learning_rate=0.0000001;
my_optimizer=train.minimize(train.GradientDescentOptimizer(learning_rate), loss)
for the first time, I get the error message
On worker 2:
LoadError: ArgumentError: Package MacroTools not found in current path:
- Run `Pkg.add("MacroTools")` to install the MacroTools package.
After adding MacroTools.jl via the package manager and rerunning the example, the Julia kernel crashes with a segmentation fault:
signal (11): Segmentation fault: 11
in expression starting at no file:0
TF_GraphGetTensorNumDims at /Users/sd/.julia/packages/TensorFlow/A6TdG/deps/usr/bin/libtensorflow.dylib (unknown line)
get_tensor_num_dims at /Users/sd/.julia/packages/TensorFlow/A6TdG/src/core.jl:25
get_shape at /Users/sd/.julia/packages/TensorFlow/A6TdG/src/core.jl:1823
show at /Users/sd/.julia/packages/TensorFlow/A6TdG/src/show.jl:50
show at ./sysimg.jl:195
jl_fptr_trampoline at /Users/osx/buildbot/slave/package_osx64/build/src/gf.c:1829
display at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v0.7/REPL/src/REPL.jl:131
display at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v0.7/REPL/src/REPL.jl:135
display at ./multimedia.jl:287
#7 at /Users/sd/.julia/packages/Media/Lrdeg/src/compat.jl:28
hookless at /Users/sd/.julia/packages/Media/Lrdeg/src/compat.jl:14
render at /Users/sd/.julia/packages/Media/Lrdeg/src/compat.jl:27
render at /Users/sd/.julia/packages/Media/Lrdeg/src/system.jl:160
display at /Users/sd/.julia/packages/Media/Lrdeg/src/compat.jl:9
display at ./multimedia.jl:287
jl_fptr_trampoline at /Users/osx/buildbot/slave/package_osx64/build/src/gf.c:1829
jl_apply at /Users/osx/buildbot/slave/package_osx64/build/src/./julia.h:1538 [inlined]
jl_f__apply at /Users/osx/buildbot/slave/package_osx64/build/src/builtins.c:563
jl_f__apply_latest at /Users/osx/buildbot/slave/package_osx64/build/src/builtins.c:601
#invokelatest#1 at ./essentials.jl:691 [inlined]
invokelatest at ./essentials.jl:690 [inlined]
print_response at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v0.7/REPL/src/REPL.jl:154
unknown function (ip: 0x109ead87e)
print_response at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v0.7/REPL/src/REPL.jl:139
unknown function (ip: 0x109ead2ab)
do_respond at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v0.7/REPL/src/REPL.jl:708
unknown function (ip: 0x109e96124)
jl_apply at /Users/osx/buildbot/slave/package_osx64/build/src/./julia.h:1538 [inlined]
jl_f__apply at /Users/osx/buildbot/slave/package_osx64/build/src/builtins.c:563
jl_f__apply_latest at /Users/osx/buildbot/slave/package_osx64/build/src/builtins.c:601
#invokelatest#1 at ./essentials.jl:691 [inlined]
invokelatest at ./essentials.jl:690 [inlined]
run_interface at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v0.7/REPL/src/LineEdit.jl:2261
run_frontend at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v0.7/REPL/src/REPL.jl:1029
run_repl at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v0.7/REPL/src/REPL.jl:191
#831 at ./logging.jl:311
jfptr_#831_5851.clone_1 at /Applications/Programmieren/Julia-0.7.app/Contents/Resources/julia/lib/julia/sys.dylib (unknown line)
jl_apply at /Users/osx/buildbot/slave/package_osx64/build/src/./julia.h:1538 [inlined]
jl_f__apply at /Users/osx/buildbot/slave/package_osx64/build/src/builtins.c:563
jl_f__apply_latest at /Users/osx/buildbot/slave/package_osx64/build/src/builtins.c:601
#invokelatest#1 at ./essentials.jl:691 [inlined]
invokelatest at ./essentials.jl:690 [inlined]
macro expansion at ./logging.jl:308 [inlined]
run_main_repl at ./client.jl:340
exec_options at ./client.jl:252
_start at ./client.jl:432
true_main at /Applications/Programmieren/Julia-0.7.app/Contents/Resources/julia/bin/julia (unknown line)
main at /Applications/Programmieren/Julia-0.7.app/Contents/Resources/julia/bin/julia (unknown line)
Allocations: 66064400 (Pool: 66051866; Big: 12534); GC: 149
In some other cases, I got the error messages
On worker 2: The Python TensorFlow package could not be imported. You must install Python TensorFlow before using this package. error at ./error.jl:33
or
UndefVarError: py_gradients not defined
Unfortunately, I can’t reproduce them after reinstalling everything and starting from scratch. I would assume that these errors are related to the current issues with PyCall.jl and Conda.
Are there any ideas on how to get the package to work?

Unable to quit debug/browser mode

In the RStudio console, I ran the following commands:
> debug(ls)
> ls()
Then I could not quit/leave debug mode. I tried the following:
1. "Q" + Enter
2. "c" + Enter
3. the red "Stop" button
but none of these leaves debug mode.
Browse[2]> Q
debugging in: ls(.Internal(getNamespaceRegistry()), all.names = TRUE)
debug: {
if (!missing(name)) {
Error: unable to quit when browser is active
Browse[2]> c
exiting from: ls(.Internal(getNamespaceRegistry()), all.names = TRUE)
debugging in: ls(.Internal(getNamespaceRegistry()), all.names = TRUE)
debug: {
if (!missing(name)) {
.....
The RStudio version: 0.98.1060 – © 2009-2013 RStudio, Inc.
The R version: 3.1.1 (i386).
Has anyone had this issue in Rstudio?
Thanks.
Ang
The problem is that as soon as you leave debug mode, something triggers a call to ls, which puts you straight back into debug mode. To fix the issue, turn off debugging for ls before you leave the debugger:
Browse[2]> undebug(ls)
Browse[2]> Q
Consider using debugonce rather than debug to avoid getting into these kinds of loops.
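For example (a quick sketch):
debugonce(ls)  # arms the debugger for the next call to ls() only
ls()           # enters the browser this one time
ls()           # runs normally afterwards; nothing to undebug()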

Cannot read unicode .csv into R

I have a .csv file, which contains the following data:
"Ա","Բ"
1,10
2,20
I cannot read it into R in a way that displays the column names as they appear in the file.
d <- read.csv("./Data/1.csv", fileEncoding="UTF-8")
head(d)
Produces the following:
> d <- read.csv("./Data/1.csv", fileEncoding="UTF-8")
Warning messages:
1: In read.table(file = file, header = header, sep = sep, quote = quote, :
invalid input found on input connection './Data/1.csv'
2: In read.table(file = file, header = header, sep = sep, quote = quote, :
incomplete final line found by readTableHeader on './Data/1.csv'
> head(d)
[1] X.
<0 rows> (or 0-length row.names)
Meanwhile, doing the same without specifying the fileEncoding produces this:
> d <- read.csv("./Data/1.csv")
> head(d)
Ô. Ô²
1 1 10
2 2 20
When I run the "file" utility to find out the encoding of the file, it says it is UTF-8:
Data\1.csv: UTF-8 Unicode text, with CRLF line terminators
I am using RStudio, Windows 7, R version 2.15.2, 32-bit.
Thanks in advance.
I wrote a longer answer on the same issue here: R on Windows: character encoding hell.
Quick answer: using the parameter encoding instead of fileEncoding should fix your first issue. You may not be able to display it properly in either the console or the table view in RStudio, but you will be able to use it in formulas.
d <- read.csv("./Data/1.csv", encoding="UTF-8")
head(d)
Having saved your table into a UTF-8 file:
> test2 <- read.csv("test2.csv", header = FALSE, sep = ",", quote = "\"", dec = ".", fill = TRUE, comment.char = "", encoding = "UTF-8")
Warning message:
In read.table(file = file, header = header, sep = sep, quote = quote, :
incomplete final line found by readTableHeader on 'test2.csv'
This is how it looks in the console and in the RStudio view:
> test2
V1 V2
1 <U+0531> <U+0532>
2 1 10
3 2 20
Importantly, however, you are able to manipulate this within R. In my case it is possible to see that the script-window input Ա has UTF-8 encoding, and grep correctly finds it in your table:
> Encoding("Ա")
[1] "UTF-8"
> grep("Ա", as.character(test2[1,1]))
[1] 1
You may need to find suitable encoding variants that work with your settings, or possibly change the settings themselves; unfortunately I am not sure where that is done.
You might not be able to make it pretty at every stage, but it is definitely possible to get it working in a Windows 7 environment as well.
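As a rough pointer for inspecting those settings (just a sketch; whether changing them helps is system-dependent):
Sys.getlocale("LC_CTYPE")  # the character-type locale currently in use
# Sys.setlocale("LC_CTYPE", "English_United States.1252")  # example value on Windows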
I tried two ways to replicate your problem.
I copied the characters above into RStudio and saved them to a CSV with this code:
write.csv(c("Ա","Բ",
1,10,
2,20), "test.csv")
df <- read.csv("test.csv")
This worked fine.
Then I thought, well, maybe R is cheating when I save it to CSV with R? So I just pasted the characters into a text file and saved it as a CSV. This approach doesn't have problems either.
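For completeness, a sketch that reproduces the original two-column layout exactly (the write.csv() call above writes a single column; the header strings are the ones from the question):
df0 <- data.frame(c(1, 2), c(10, 20))
names(df0) <- c("Ա", "Բ")  # set the Unicode headers directly
write.csv(df0, "test.csv", row.names = FALSE, fileEncoding = "UTF-8")
read.csv("test.csv", encoding = "UTF-8")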
Here's my session info:
sessionInfo()
R version 3.0.1 (2013-05-16)
Platform: x86_64-pc-linux-gnu (64-bit)
locale:
[1] LC_CTYPE=en_CA.UTF-8 LC_NUMERIC=C LC_TIME=en_CA.UTF-8
[4] LC_COLLATE=en_CA.UTF-8 LC_MONETARY=en_CA.UTF-8 LC_MESSAGES=en_CA.UTF-8
[7] LC_PAPER=C LC_NAME=C LC_ADDRESS=C
[10] LC_TELEPHONE=C LC_MEASUREMENT=en_CA.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats4 grid stats graphics grDevices utils datasets methods base
other attached packages:
[1] party_1.0-9 modeltools_0.2-21 strucchange_1.4-7 sandwich_2.2-10 zoo_1.7-10
[6] GGally_0.4.4 reshape_0.8.4 plyr_1.8 ggplot2_0.9.3.1
loaded via a namespace (and not attached):
[1] coin_1.0-23 colorspace_1.2-2 dichromat_2.0-0 digest_0.6.3
[5] gtable_0.1.2 labeling_0.2 lattice_0.20-23 MASS_7.3-29
[9] munsell_0.4.2 mvtnorm_0.9-9995 proto_0.3-10 RColorBrewer_1.0-5
[13] reshape2_1.2.2 scales_0.2.3 splines_3.0.1 stringr_0.6.2
I had the same problem and found out that the file was corrupted.
I opened the file with OpenOffice and saved it back using the "UTF8" character set (you need to tick the "edit filter settings" box), then imported it with read.csv() (no encoding or fileEncoding option) and it worked fine.
