Change vignette handling? (RStudio)

I'd like to write a vignette in Sweave or knitr and edit it in the RStudio IDE, but I don't want to use the standard code to weave it; I'd like some pre-processing (with my patchDVI package). Is there a way to replace the code that does the .Rnw-to-.tex translation?
One idea I had was to create a new non-Sweave vignette engine and declare it in the document (using %\VignetteEngine), but RStudio ignores the declaration and just runs the regular Sweave() function, which fails.
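For reference, this is roughly how I register the engine in the package (a minimal sketch; my_weave is a placeholder for the real patchDVI processing):
# Registered from the package's .onLoad (sketch):
tools::vignetteEngine(
  "patchDVI",
  weave   = function(file, ...) my_weave(file, ...),  # placeholder
  tangle  = utils::Stangle,
  pattern = "[.]Rnw$",
  package = "patchDVI"
)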

I'm not aware of a way to customize the weaving code in RStudio.
The best thing I can think of is to put your own weaving code in a Makefile, keep the RStudio IDE and your PDF viewer side by side, and use something like a shell loop to recompile the vignette, e.g.
while true; do
	sleep 1 && make vignette   # cheap: make is a no-op when the PDF is up to date
done
A sample Makefile:
vignette: foo.pdf

%.pdf: %.Rnw
	your_own_command && R CMD Sweave --pdf $<   # --pdf also runs texi2pdf, so the rule really produces a PDF

Related

Using the m4 pre-processor in the RStudio tool chain for modular Rmd documents

We need to modularise some of our lab documents so that standard pieces of text (test method descriptions, boilerplate terms and conditions, etc.) can be kept in one place (a library of test methods) and pulled in as necessary.
I have had a look at bookdown, but it seems over-complex for what we need to do. I know that I could use the m4 macro processor to pull in multiple .Rmd files using a Makefile, but our staff are used to using RStudio to build their documents.
i.e., I can do this in the top level document:
changequote([', ]')
undivert([../../Library/testmethod1summary.Rmd])
and use the Makefile to preprocess the top-level file and include all subsequent files. I've done something like this with pandoc previously:
%.pdf: %.md
	m4 $< > _tmp.md
	pandoc _tmp.md -o $@
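A fuller version for .Rmd sources might look like this (a sketch; the file names are placeholders):
%.pdf: %.Rmd
	m4 $< > _tmp.Rmd
	Rscript -e 'knitr::knit("_tmp.Rmd", output = "_tmp.md")'
	pandoc _tmp.md -o $@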
Is there any way to add the m4 program to the RStudio build pipeline so that you can use the usual RStudio knitr commands (Ctrl-Shift-K or Ctrl-Shift-B)?
Pete

Including #foo preprocessor directives at compile time (GNU tools)

I've recently run into such a problem, in fact caused by the package maintainer(s), who simply did not consider that a certain preprocessor definition was not available until version X of a toolkit package required in the dependencies (currently in testing). It was fixable by simply adding an additional #define to a header file in the base system, which made the project compile fine again.
However, what if I had no root access to the system? Could I also add a #define new_macro "i am from the future" at compile time, e.g. via configure?
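For instance, I would have expected something like this naive attempt to work (a sketch):
./configure CPPFLAGS='-Dnew_macro="i am from the future"'
# but the shell strips the inner quotes when the recipe runs, so
# new_macro no longer expands to a C string literal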
When reading through the matter, I thought it might work with the DEFS environment variable, but apparently that is not meant to be used for C preprocessor directives.
So can this be accomplished at all?
Thanks, but unfortunately the strings in quotes are a huge problem.
Create a file, for example at ~/somedir/mycompiler, with this content:
#!/bin/sh
# The single quotes preserve the inner double quotes, so new_macro
# expands to a proper C string literal; "$@" forwards the real arguments.
gcc -Dnew_macro='"i am from the future"' "$@"
Add execute permission with chmod +x ~/somedir/mycompiler and then pass the script to configure:
./configure CC="$HOME"/somedir/mycompiler ...
The configure script will in turn use that wrapper to compile everything, passing the -D definition everywhere, and the quotes will be parsed correctly by sh.
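To sanity-check the wrapper, you can preprocess a throwaway one-liner through it (check.c is just an example name):
printf 'const char *s = new_macro;\n' > check.c
~/somedir/mycompiler -E check.c
# should print: const char *s = "i am from the future";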

cmake: How to add an add_custom_command that just executes a shell script?

When I use classic GNU Make, I put in post-build actions such as flashing the device (if it is an embedded device) and other similar steps. The actual flashing is usually hidden behind a little script or a few commands.
Then I can type something like
make flash
so I first build the code and then it ends up on the target.
The classic Makefile could have something like this in it:
.PHONY: flash
flash: main.bin
	scripts/do_flash.pl main.bin
But how do I add this kind of post-build action to a CMake build?
How do I add a "custom command" that just executes a shell script?
The question cmake add custom command feels like it is close, but add_custom_command seems to need an output file to work, and in this case something is executed, not generated.
What would I put in the CMakeLists.txt to add such a custom action?
/Thanks
For reference, a link to the CMake documentation on this topic:
http://www.cmake.org/cmake/help/cmake2.6docs.html#command:add_custom_target
Try this:
add_custom_target(flash
    # PERL_EXECUTABLE is set by find_package(Perl); MAIN_BIN_FILE is
    # whatever variable holds the path to main.bin in your project.
    COMMAND ${PERL_EXECUTABLE} ${CMAKE_SOURCE_DIR}/scripts/do_flash.pl ${MAIN_BIN_FILE}
    DEPENDS ${MAIN_BIN_FILE}
)
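A custom target is not part of the default build, so invoke it explicitly, for example:
cmake --build . --target flash
# or, with a Makefile generator:
make flash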

Xcode -- Expand all macros in a file, without doing a full precompile?

I am trying to read some code that has a lot of macros in it, and often the macros are chained. Is there any way to see a version of the file where all the macros have been expanded, without doing a full run of the preprocessor (which would also do things like expand #imports)? This would really help me read the code.
EDIT: Often the macros are defined in other files.
Not sure if there's a way to do this in Xcode, but you can use the compiler, specifically the -E option, which stops processing right after preprocessing.
cc -E foo.c
will print all the preprocessed results on stdout. And
cc -E foo.c -o foo.preproc
will dump the preprocessed output into foo.preproc.
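In an Xcode project the file usually needs the same search paths and defines as its target, so pass those along too (a sketch; the flags shown are placeholders):
xcrun clang -E -I./include -DDEBUG=1 foo.m -o foo.preproc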
As best I can tell, the answer to my question is that there is no way to do it. The best I can do is a full preprocess, then search for the part of the output that comes after all the expanded #includes.

Change older makefile system to take advantage of parallel compiles

We use Microsoft NMAKE to compile a large number of native C++ and some Intel Fortran files. Typically the makefiles contain lines such as this (for each file):
$(LINKPATH)\olemisc.obj : ole2\olemisc.cpp $(OLEMISC_DEP)
	$(CCDEBUG) ole2\olemisc.cpp
	$(GDEPS) ole2\olemisc.cpp
OLEMISC_DEP =\
e:\ole2\ifaceole.hpp\
e:\ole2\cpptypes.hpp\
etc.
It works fine, but compiles one file at a time. We would like to take advantage of multi-core processors and compile more than one file at a time. I would appreciate some advice about the best way to make that happen. Here is what I have so far.
One: GNU make lets you execute parallel jobs using the --jobs=2 option, for example, and that works fine with GCC (we can't use GCC, sadly). But Microsoft's NMAKE does not seem to support such an option. How compatible would the two make programs be, and if we did start using GNU make, can you run two cl.exe processes at the same time? I would expect them to complain about the PDB (debug) file being locked, or does one of the newer cl.exe command-line arguments get you around that?
Two: cl.exe has a /MP (build with multiple processes) flag, which lets you compile multiple files at the same time if passed together via the command line, for example:
cl /MP7 a.cpp b.cpp c.cpp d.cpp e.cpp
But using this would require changes to the makefile. Our makefiles are generated by our own program from other files, so I can easily change what we put in them. But how do you combine the dependencies from different .cpp files in the makefile so that they get compiled together via one cl.exe call, when each .obj is a different target with its own set of commands to make it?
Or do I change the makefile to call, instead of cl.exe, some other little executable that we write, which collects a series of .cpp files together and shells out to cl.exe, passing multiple arguments? That would work and seems doable, but also seems overly complicated, and I can't see anyone else doing it.
Am I missing something obvious? There must be a simpler way of accomplishing this?
We do not use Visual Studio or a solution file to do the compiles, because the list of files is extensive, we have a few special items in our makefiles, and we theoretically do not want to be overly tied to MS C++, etc.
I thoroughly recommend GNU make on Windows. I tend to use Cygwin make, as the environment it creates tends to be very portable to Unix-like platforms (Mac and Linux for a start). Compiling with the Microsoft toolchain, in parallel, with 100% accurate dependencies and full CPU usage, works very well. You have other requirements, though.
As far as your nmake question goes, look up batch-mode inference rules in the manual. Basically, nmake is able to call the C compiler once, passing it a whole load of C files in one go. Thus you can use the compiler's /MP-style switches.
Parallel compiling built into the compiler? Pah! Horribly broken I say. Here is a skeleton anyway:
OBJECTS = a.obj b.obj c.obj

f.exe: $(OBJECTS)
	link $** -o $@

$(OBJECTS): $$(@R).c

# "The only syntactical difference from the standard inference rule
# is that the batch-mode inference rule is terminated with a double colon (::)."
.c.obj::
	cl -c /MP4 $<
EDIT
If each .obj has its own dependencies (likely!), then you simply add these as separate dependency lines (i.e., they don't have any shell commands attached):
a.obj: b.h c.h ../include/e.hpp
b.obj: b.h ../include/e.hpp
...
Often such boilerplate is generated by another tool and !INCLUDEd into the main makefile. If you are clever, you can generate these dependencies for free as you compile. (If you go this far, nmake starts to creak at the seams, and you should maybe change to GNU make.)
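One way to get those dependencies for free with the Microsoft compiler is its /showIncludes switch; a small script can rewrite the output into dependency lines (a sketch, with the parsing left to you):
cl /nologo /showIncludes /c olemisc.cpp
:: prints lines like "Note: including file: e:\ole2\ifaceole.hpp",
:: which can be rewritten as "olemisc.obj: e:\ole2\ifaceole.hpp"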
One further consideration to keep in mind: you basically have to define one batch rule per path and extension. If you have two files with the same name in two different source directories, each covered by a batch inference rule, the rule that fires might not be the one you want.
Basically, the make system knows it needs to make a certain .obj file, and as soon as it finds an inference rule that lets it do that, it will use it.
The workaround is to not have duplicate file names; if that can't be avoided, don't use inference or batch rules for those files.
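For instance, if foo.c exists in both src1 and src2 (illustrative names), nmake may satisfy $(LINKPATH)\foo.obj via whichever of these rules it considers first:
{src1}.c{$(LINKPATH)}.obj::
	cl -c $<
{src2}.c{$(LINKPATH)}.obj::
	cl -c $<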
Ok, I spent some time this morning working on this, and thanks to bobbogo, I got it to work. Here are the exact details for anyone else who is considering this:
The old-style makefile, which compiles one file at a time, has tons of this:
$(LINKPATH)\PS_zlib.obj : zlib\PS_zlib.cpp $(PS_ZLIB_DEP)
	$(CC) zlib\PS_zlib.cpp
$(LINKPATH)\ioapi.obj : zlib\minizip\ioapi.c $(IOAPI_DEP)
	$(CC) zlib\minizip\ioapi.c
$(LINKPATH)\iowin32.obj : zlib\minizip\iowin32.c $(IOWIN32_DEP)
	$(CC) zlib\minizip\iowin32.c
Note that each file is compiled one at a time. Now you want to use the fancy Visual Studio 2010 /MP switch ("/MP[n] use up to 'n' processes for compilation") to compile multiple files at the same time. How? Your makefile needs to make use of nmake's batch inference rules, as follows:
$(LINKPATH)\PS_zlib.obj : zlib\PS_zlib.cpp $(PS_ZLIB_DEP)
$(LINKPATH)\ioapi.obj : zlib\minizip\ioapi.c $(IOAPI_DEP)
$(LINKPATH)\iowin32.obj : zlib\minizip\iowin32.c $(IOWIN32_DEP)

# Batch inference rule for extension "cpp" and path "zlib":
{zlib}.cpp{$(LINKPATH)}.obj::
	$(CC) $(CCMP) $<

# Batch inference rule for extension "c" and path "zlib\minizip":
{zlib\minizip}.c{$(LINKPATH)}.obj::
	$(CC) $(CCMP) $<
In this case, elsewhere, we have
CCMP = /MP4
Note that nmake batch inference rules do not support wildcards or spaces in the paths. I found some decent nmake documentation somewhere which states that you need to create a separate rule for every extension and source-file location; you cannot have one rule covering files in different locations. Also, files that use #import cannot be compiled with /MP.
We have a tool that generates our makefiles, so it now also generates the batch inference rules.
But it works! The time to compile one large dll went from 12 minutes down to 7 minutes! Woohoo!
