knit Rmarkdown moderncv to pdf using makefile with sty file in subdirectory - makefile

I am using the moderncv class to create a CV in Rmarkdown. In order to make the cv reproducible out of the box I have included the .cls and .sty files in the root directory. However, in an effort to keep the root directory uncluttered I would prefer to keep all the moderncv related files in a subdirectory (assets/tex/). I am able to access the .cls file using a relative path in the yaml front matter, but I am not able to access the .sty files unless they are in the root directory.
Searching previous questions on Stack Overflow I learned the following: (1) keeping .cls and .sty files in nested directories is not recommended. I understand this and would like to do it anyway, so that other people can fork my project and knit the CV without having to deal with finding their texmf folder. (2) The solution to my problem seems to involve setting the TEXINPUTS variable using a Makefile (see this thread and another thread).
I am not very good with Makefiles, but I have managed to get one working that will knit my .Rmd file to pdf without problems, so long as the .sty files are still in root. This is what it looks like currently:
PDF_FILE=my_cv.pdf

all : $(PDF_FILE)
	echo All files are now up to date

clean :
	rm -f $(PDF_FILE)

%.pdf : %.Rmd
	Rscript -e 'rmarkdown::render("$<")'
My understanding is that I can set the TEXINPUTS using:
export TEXINPUTS=".:./assets/tex:"
Where "assets/tex" represents the subdirectory where the .sty files are located. I do not know how to incorporate the above code into my makefile so that the .sty files are recognized in the subdirectories and my .Rmd is knit to PDF. In its current state, I get the following error if I remove the .sty files from root and put then in the aforementioned subdirectory:
! LaTeX Error: Command \fax already defined.
Or name \end... illegal, see p.192 of the manual.
which I assume is occurring because the moderncv class needs---and cannot locate---the relevant .sty files.

You could try to define the environment variable in the make rule. Note that each line of a recipe runs in its own shell, so the export and the render call have to be joined into one command:

%.pdf : %.Rmd
	export TEXINPUTS=".:./assets/tex:"; \
	Rscript -e 'rmarkdown::render("$<")'
Or you could set the environment variable in a set-up chunk in your Rmd file:
```{r setup, include = FALSE}
Sys.setenv(TEXINPUTS=".:./assets/tex:")
```
Note: Not tested due to lack of minimal example.
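Another variant, assuming GNU make, is to export the variable once at the top of the Makefile so that every recipe inherits it; this is a sketch in the same untested spirit as the suggestions above:

# Exported make variables are passed to the sub-shells that run recipes.
export TEXINPUTS := .:./assets/tex:

%.pdf : %.Rmd
	Rscript -e 'rmarkdown::render("$<")'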

Related

Unable to load/require file from Lua running from Atom in Windows

I'm trying to use Atom to run a Lua script. However, when I try to load files via the require() command, it always says it's unable to locate them. The files are all in the same folder. For example, to load utils.lua I have tried
require 'utils'
require 'utils.lua'
require 'D:\Users\Mike\Dropbox\Lua Modeling\utils.lua'
require 'D:\\Users\\Mike\\Dropbox\\Lua Modeling\\utils.lua'
require 'D:/Users/Mike/Dropbox/Lua Modeling/utils.lua'
I get errors like
Lua: D:\Users\Mike\Dropbox\Lua Modeling\main.lua:12: module 'D:\Users\Mike\Dropbox\Lua Modeling\utils.lua' not found:
no field package.preload['D:\Users\Mike\Dropbox\Lua Modeling\utils.lua']
no file '.\D:\Users\Mike\Dropbox\Lua Modeling\utils\lua.lua'
no file 'D:\Program Files (x86)\Lua\5.1\lua\D:\Users\Mike\Dropbox\Lua Modeling\utils\lua.lua'
no file 'D:\Program Files (x86)\Lua\5.1\lua\D:\Users\Mike\Dropbox\Lua Modeling\utils\lua\init.lua'
no file 'D:\Program Files (x86)\Lua\5.1\D:\Users\Mike\Dropbox\Lua Modeling\utils\lua.lua'
The message says on the first line that 'D:\Users\Mike\Dropbox\Lua Modeling\utils.lua' was not found, even though that is the full path to the file. What am I doing wrong?
Thanks.
The short answer
You should be able to load utils.lua by using the following code:
require("utils")
And by starting your program from the directory that utils.lua is in:
cd "D:\Users\Mike\Dropbox\Lua Modeling"
lua main.lua
The long answer
To understand what is going wrong here, it is helpful to know a little bit about how require works. The first thing that require does is to search for the module in the module path. From Programming in Lua chapter 8.1:
The path used by require is a little different from typical paths. Most programs use paths as a list of directories wherein to search for a given file. However, ANSI C (the abstract platform where Lua runs) does not have the concept of directories. Therefore, the path used by require is a list of patterns, each of them specifying an alternative way to transform a virtual file name (the argument to require) into a real file name. More specifically, each component in the path is a file name containing optional interrogation marks. For each component, require replaces each ? by the virtual file name and checks whether there is a file with that name; if not, it goes to the next component. The components in a path are separated by semicolons (a character seldom used for file names in most operating systems). For instance, if the path is
?;?.lua;c:\windows\?;/usr/local/lua/?/?.lua
then the call require"lili" will try to open the following files:
lili
lili.lua
c:\windows\lili
/usr/local/lua/lili/lili.lua
Judging from your error message, your Lua path seems to be the following:
.\?.lua;D:\Program Files (x86)\Lua\5.1\lua\?.lua;D:\Program Files (x86)\Lua\5.1\lua\?\init.lua;D:\Program Files (x86)\Lua\5.1\?.lua
To make that easier to read, here are the patterns, one per line:
.\?.lua
D:\Program Files (x86)\Lua\5.1\lua\?.lua
D:\Program Files (x86)\Lua\5.1\lua\?\init.lua
D:\Program Files (x86)\Lua\5.1\?.lua
From this list you can see that when calling require:
Lua fills in the .lua extension for you
Lua fills in the rest of the file path for you
In other words, you should just specify the module name, like this:
require("utils")
Now, Lua also needs to know where the utils.lua file is. The easiest way is to run your program from the D:\Users\Mike\Dropbox\Lua Modeling folder. This means that when you run require("utils"), Lua will expand the first pattern .\?.lua into .\utils.lua, and when it checks that path it will find the utils.lua file in the current directory.
In other words, running your program like this should work:
cd "D:\Users\Mike\Dropbox\Lua Modeling"
lua main.lua
An alternative
If you can't (or don't want to) change your working directory to run the program, you can use the LUA_PATH environment variable to add new patterns to the path that require uses to search for modules.
set LUA_PATH=D:\Users\Mike\Dropbox\Lua Modeling\?.lua;%LUA_PATH%;
lua "D:\Users\Mike\Dropbox\Lua Modeling\main.lua"
There is a slight trick to this. If the LUA_PATH environment variable already exists, then this will add your project's folder to the start of it. If LUA_PATH doesn't exist, this will add ;; to the end, which Lua fills in with the default path.
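If you would rather not rely on environment variables at all, a third possibility is to prepend the same pattern to package.path at the top of main.lua. This is a minimal sketch using the folder from the question, not something from the original answer:

-- Let require("utils") find modules in the project folder,
-- wherever the interpreter was started from.
package.path = "D:\\Users\\Mike\\Dropbox\\Lua Modeling\\?.lua;" .. package.path

local utils = require("utils")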

Failing to open a file which should be in the base path

I have a Go project (bazel-remote) that, when built with bazel, tries to read a yaml file passed on the command line. This yaml file sits in the same location from which I run the bazel run command.
But it fails to run because os.Open fails with no such file or directory.
I printed the basePath using os.Getwd, because someone suggested that my basePath might be set wrong. But my basePath is set to a location in my /private/var/tmp/ where the bazel objects are created and stored:
/private/var/tmp/bazel/312feba8ddcde6737ae7dd7ef9bc2a5a/execroot/main/bazel-out/darwin-fastbuild/bin/darwin_amd64_static_pure_stripped/bazel-remote.runfiles/main
How do I set my basePath correctly? Why is my basePath set to where it is?
Binaries started with bazel run are executed in an internal Bazel directory. They'll have access to "runfiles", which are files mentioned in the data attribute of the binary rule or its dependencies. For example, if you have a rule like the one below, you'll be able to read foo.txt, but not bar.txt or other files:
load("#io_bazel_rules_go//go:def.bzl", "go_binary")
go_binary(
name = "hello",
srcs = ["hello.go"],
data = ["foo.txt"],
)
Note that the working directory of the binary corresponds to the repository root directory, not the directory where the binary is defined. You can debug with os.Getwd and filepath.Walk.
You mentioned you wanted to access a yaml file passed in on the command line though. Presumably, you want to be able to access any file the user passes in, not just files mentioned in the data attribute. For this case, take a look at the BUILD_WORKING_DIRECTORY environment variable (bazel run sets this). That gives the path to the directory where bazel run was invoked. Also, BUILD_WORKSPACE_DIRECTORY is the path to the workspace root directory.
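A rough sketch of that approach in Go (the flag name and the error handling are illustrative assumptions, not code from bazel-remote):

package main

import (
	"flag"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Hypothetical flag for the yaml file passed on the command line.
	cfg := flag.String("config", "config.yaml", "path to the yaml config")
	flag.Parse()

	path := *cfg
	// Under `bazel run` the process starts inside Bazel's runfiles tree, so
	// resolve relative paths against the directory bazel run was invoked from.
	if wd := os.Getenv("BUILD_WORKING_DIRECTORY"); wd != "" && !filepath.IsAbs(path) {
		path = filepath.Join(wd, path)
	}

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("read %d bytes from %s\n", len(data), path)
}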

How do I get PyCharm File Watcher to maintain my directory structure for SCSS output

I am currently working on getting automatic SCSS -> CSS conversion set up using PyCharm's File Watcher functionality. I am able to have the files output to another directory, but I cannot get them to do it relative to a specific directory. Currently, I have the following settings and relevant file tree:
Tree
media/
|- c/
|  |- css/
|     |- folder/
|     |  |- file2.css
|     |- file.css
|- src/
   |- css/
      |- folder/
      |  |- file2.scss
      |- file.scss
File Watcher Settings
Scope is the media/src/css/ directory and all subdirectories recursively
Arguments is --no-cache --update $FileName$:$ProjectFileDir$/media/c/$FileDirRelativeToProjectRoot$/$FileNameWithoutExtension$.css
Working directory is $ProjectFileDir$/media/src/css/
Output paths to refresh is $ProjectFileDir$/media/c/$FileDirRelativeToProjectRoot$/$FileNameWithoutExtension$.css
With these settings, when I update file2.scss, there is an error stating that media/c/media/src/css/folder does not exist, which is not where I want the file anyway.
The issue is that I would like all paths relative to the working directory root to be preserved (i.e. media/src/css/folder -> media/c/css/folder), but all of my source SCSS files are several folder levels below the project root, and the tutorial only explains how to maintain folder structure if you are compiling directly below the root, not a folder below the root. Does anyone know a way that my folder structure could be preserved so that anything under media/src/css would have the same relative output in media/c/css?
CrazyCoder posted a solution to this in another question. It is hard to find, so I'm linking it here: https://stackoverflow.com/a/15965088/2047157
Quoting:
The trick is to use the $FileDirPathFromParent(dir)$ macro.
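Applied to the tree above, that presumably means handing the css directory to the macro so that only the path below media/src/css is reused, along these lines (untested, pieced together from the macro and the settings already in the question):

Arguments: --no-cache --update $FileName$:$ProjectFileDir$/media/c/css/$FileDirPathFromParent(css)$/$FileNameWithoutExtension$.css
Output paths to refresh: $ProjectFileDir$/media/c/css/$FileDirPathFromParent(css)$/$FileNameWithoutExtension$.css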

Using CMake, how can I concat files and install them

I'm new to CMake and I have a problem that I cannot figure out a solution to. I'm using CMake to compile a project with a bunch of optional sub-dirs, and it builds shared library files as expected. That part seems to be working fine. Each of these sub-dirs contains a sql file. I need to concatenate all the selected sql files onto one sql header file and install the result. So, one file like:
sql_header.sql
sub_dir_A.sql
sub_dir_C.sql
sub_dir_D.sql
If I did this directly in a makefile, I might do something like the following (only smarter, to deal with just the selected sub-dirs):
cat sql_header.sql > "${INSTALL_PATH}/somefile.sql"
cat sub_dir_A.sql >> "${INSTALL_PATH}/somefile.sql"
cat sub_dir_C.sql >> "${INSTALL_PATH}/somefile.sql"
cat sub_dir_D.sql >> "${INSTALL_PATH}/somefile.sql"
I have sort of figured out pieces of this, like I can use:
LIST(APPEND PACKAGE_SQL_FILES "some_file.sql")
which I assume I can place in each of the sub-dirs CMakeLists.txt files to collect the file names. And I can create a macro like:
CAT(IN "${PACKAGE_SQL_FILES}" OUT "${INSTALL_PATH}/somefile.sql")
But I am lost about what happens between when CMake initially runs and when make install runs. Maybe there is a better way to do this. I need this to work on both Windows and Linux.
I would be happy with some hints to point me in the right direction.
You can create the concatenated file mainly using CMake's file and function commands.
First, create a cat function:
function(cat IN_FILE OUT_FILE)
  file(READ ${IN_FILE} CONTENTS)
  file(APPEND ${OUT_FILE} "${CONTENTS}")
endfunction()
Assuming you have the list of input files in the variable PACKAGE_SQL_FILES, you can use the function like this:
# Prepare a temporary file to "cat" to:
file(WRITE somefile.sql.in "")

# Call the "cat" function for each input file
foreach(PACKAGE_SQL_FILE ${PACKAGE_SQL_FILES})
  cat(${PACKAGE_SQL_FILE} somefile.sql.in)
endforeach()

# Copy the temporary file to the final location
configure_file(somefile.sql.in somefile.sql COPYONLY)
The reason for writing to a temporary is so the real target file only gets updated if its content has changed. See this answer for why this is a good thing.
You should note that if you're including the subdirectories via the add_subdirectory command, the subdirs all have their own scope as far as CMake variables are concerned. In the subdirs, using list will only affect variables in the scope of that subdir.
If you want to create a list available in the parent scope, you'll need to use set(... PARENT_SCOPE), e.g.
set(PACKAGE_SQL_FILES
    ${PACKAGE_SQL_FILES}
    ${CMAKE_CURRENT_SOURCE_DIR}/some_file.sql
    PARENT_SCOPE)
All this so far has simply created the concatenated file in the root of your build tree. To install it, you probably want to use the install(FILES ...) command:
install(FILES ${CMAKE_BINARY_DIR}/somefile.sql
        DESTINATION ${INSTALL_PATH})
So, whenever CMake runs (either because you manually invoke it or because it detects changes when you do "make"), it will update the concatenated file in the build tree. Only once you run "make install" will the file finally be copied from the build root to the install location.
As of CMake 3.18, the CMake command line tool can concatenate files using cat. So, assuming a variable PACKAGE_SQL_FILES containing the list of files, you can run the cat command using execute_process:
# Concatenate the sql files into a variable 'FINAL_FILE'.
execute_process(COMMAND ${CMAKE_COMMAND} -E cat ${PACKAGE_SQL_FILES}
                OUTPUT_VARIABLE FINAL_FILE
                WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR}
)
# Write out the concatenated contents to 'final.sql.in'.
# Quote the variable so semicolons in the sql are not treated as list separators.
file(WRITE final.sql.in "${FINAL_FILE}")
The rest of the solution is similar to Fraser's response. You can use configure_file so the resultant file is only updated when necessary.
configure_file(final.sql.in final.sql COPYONLY)
You can still use install in the same way to install the file:
install(FILES ${CMAKE_CURRENT_BINARY_DIR}/final.sql
        DESTINATION ${INSTALL_PATH})

Intltool with an autoconf-generated .desktop file

In the Emperor project, I'm having some issues getting intltool to work when doing an out-of-tree build. When running make check out-of-tree, which is one of the things make distcheck does, intltool fails thus:
INTLTOOL_EXTRACT="/usr/bin/intltool-extract" XGETTEXT="/usr/bin/xgettext" srcdir=../../po /usr/bin/intltool-update --gettext-package emperor --pot
can't open ../../po/../data/emperor.desktop.in: No such file or directory at /usr/bin/intltool-extract line 212.
intltool is looking for emperor.desktop.in, which is listed in po/POTFILES.in, in the source tree. However, emperor.desktop.in is generated by the configure script from a file called emperor.desktop.in.in, in order to insert the installed executable path as configured by the user, and lands in the build tree.
These are the relevant bootstrap.sh lines:
echo +++ Running intltoolize ... &&
intltoolize --force --copy &&
cat >>po/Makefile.in.in <<EOF
../data/_column_names.h:
	cd ../data && \$(MAKE) _column_names.h
EOF
The setup code in configure.ac:
IT_PROG_INTLTOOL([0.35.0])
GETTEXT_PACKAGE=emperor
AC_SUBST(GETTEXT_PACKAGE)
AC_DEFINE_UNQUOTED([GETTEXT_PACKAGE], ["$GETTEXT_PACKAGE"],
[The domain to use with gettext])
AM_GLIB_GNU_GETTEXT
data/emperor.desktop.in is listed in AC_CONFIG_FILES.
data/Makefile.am contains these lines:
desktopdir = $(datadir)/applications
desktop_in_files = emperor.desktop.in
desktop_DATA = $(desktop_in_files:.desktop.in=.desktop)
@INTLTOOL_DESKTOP_RULE@
and po/POTFILES.in contains the line
data/emperor.desktop.in
You can review all the details in the public git repository if you wish.
Can I somehow tell intltool that this file will be located in the build tree, not in the source tree? Otherwise, my options appear to be to break make distcheck (not a great option), or to ship a desktop file that doesn't include the full path and assumes the executable is installed somewhere in the PATH (just as messy, IMHO). Any other options?
In your source code you have emperor.desktop.in.in, which does not seem to appear in any rule as a dependency. That file has to be converted first to emperor.desktop.in and later to emperor.desktop, which does not seem to happen in your data/Makefile.am.
desktopdir = $(datadir)/applications
desktop_in_in_files = emperor.desktop.in.in
desktop_in_files = $(desktop_in_in_files:.desktop.in.in=.desktop.in)
desktop_DATA = $(desktop_in_files:.desktop.in=.desktop)
@INTLTOOL_DESKTOP_RULE@

[...]

EXTRA_DIST = \
	$(desktop_in_in_files) \
	[...]
Here EXTRA_DIST contains $(desktop_in_in_files), and make will know how to deal with the rest.
Some further digging has led me to believe that the answer is: intltool does not support source files that are not themselves sources in the project (i.e. generated files). Ergo, any additional processing must be done after intltool is through.
Intltool requires the lines in POTFILES to be relative to the (build-time) working directory. The file POTFILES is generated by the configure script from POTFILES.in with a simple sed script defined in the IT_PO_SUBDIR autoconf macro (called by IT_PROG_INTLTOOL) that simply prepends the relative location of the top-level source directory to the paths. Alas, modifying POTFILES does not help: the intltool-extract script does everything it can to get the source directory right. I don't believe files that are sometimes inside and sometimes outside the source tree can be supported without modifying intltool itself.
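A sketch of what such post-processing could look like in data/Makefile.am, assuming emperor.desktop.in is turned back into a tracked source file and the installed path is patched in after intltool has merged the translations (an illustration, not the rule the project actually uses):

desktop_in_files = emperor.desktop.in
desktop_DATA = $(desktop_in_files:.desktop.in=.desktop)
@INTLTOOL_DESKTOP_RULE@

# After intltool has produced emperor.desktop, rewrite the Exec line
# with the configured installation path.
all-local: emperor.desktop
	sed -e 's|^Exec=.*|Exec=$(bindir)/emperor|' emperor.desktop > emperor.desktop.tmp && \
	mv emperor.desktop.tmp emperor.desktop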
