Quarto rendering pdf (running xelatex) faster in Rstudio - rstudio

I am wondering if it is possible to render a Quarto document faster.
When I render my document I get these messages in the "Background Jobs" window:
running xelatex - 1
This is XeTeX, Version 3.141592653-2.6-0.999994 (TeX Live 2022) (preloaded format=xelatex)
restricted \write18 enabled.
entering extended mode
running xelatex - 2
This is XeTeX, Version 3.141592653-2.6-0.999994 (TeX Live 2022) (preloaded format=xelatex)
restricted \write18 enabled.
entering extended mode
running xelatex - 3
This is XeTeX, Version 3.141592653-2.6-0.999994 (TeX Live 2022) (preloaded format=xelatex)
restricted \write18 enabled.
entering extended mode
All the way until:
running xelatex - 10
This is XeTeX, Version 3.141592653-2.6-0.999994 (TeX Live 2022) (preloaded format=xelatex)
restricted \write18 enabled.
entering extended mode
WARNING: maximum number of runs (9) reached
Output created:
Does anyone know what those 10 "running xelatex" messages mean? Is there a way to decrease the number of runs, and what is the impact on the final rendering?

Quarto generates PDF via LaTeX by default. The LaTeX processors sometimes require multiple runs, as they step through the document sequentially and need another run to, for example, update the table of contents after seeing another heading. Usually two runs are enough, but some LaTeX packages require more runs to get the correct output. Ten runs is quite unusual, though; there might be something wrong elsewhere.
Quarto uses a heuristic to determine whether it needs to run the LaTeX engine once more. Quarto also installs any missing LaTeX packages automatically. However, the tool latexmk is probably better at determining when additional calls to xelatex are needed. You can use this tool with Quarto:
---
format:
  pdf:
    pdf-engine: latexmk
    pdf-engine-opt: -xelatex
---
If the document still requires 10 runs, then there might be an issue with a LaTeX package somewhere.
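If you want to see what keeps forcing the extra passes, one option is to keep the intermediate LaTeX sources and compile them with latexmk yourself, since latexmk reports the reason for every additional pass. This is only a sketch: it assumes your Quarto version supports the keep-tex option and the -M metadata flag, and doc.qmd stands in for your document name.
# keep the intermediate .tex file next to the output
quarto render doc.qmd --to pdf -M keep-tex:true
# compile the kept .tex directly; latexmk explains each rerun
# (e.g. "Rerun to get cross-references right")
latexmk -xelatex -interaction=nonstopmode doc.tex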

Related

Validating an offline installation for Visual Studio (2019 or 2022) created with the --layout option

So my question is: how to validate/verify the layout for an offline situation while ensuring that no network connection will be necessary in the offline installation scenario (i.e. at least --noweb)?
NB: to be certain, I like to turn off the network entirely while validating (from within a VM), but that appears to be the idea behind --noweb anyway.
Background
I prefer to create an offline installation for Visual Studio which combines the different editions in one .iso (UDF) file. It generally works nicely thanks to the duplication across editions, which mkisofs can deduplicate via -duplicates-once; packers can achieve the same if they know how to handle hardlinks, after a treatment with hardlink, dfhl or similar tools. The resulting .iso for VS 2022 (17.3.6), for example, is a mere 36 GiB in size, including the editions: Build Tools ("28 GiB"), Community ("35 GiB"), Professional ("35 GiB") and Enterprise ("35 GiB"). The hardlinking process saves a little over 100 GiB altogether.
Since I typically get at least a handful of download errors during a single run, I tend to run the initial vs_<Edition>.exe --layout %CD%\vs2022\<Edition> --lang en-us command at least twice, until I see the final success message. Twice is usually sufficient for that.
However, now I would like to make sure that each individual layout is truly valid for offline installation. Alas, the help page isn't exactly helpful for this scenario, and the command I came up with doesn't seem to do anything.
Executed from cmd.exe (no matter if elevated or not) and from within the directory specified in --layout during preparation:
.\vs_setup.exe --layout %CD% --verify --noweb --passive --lang en-us
NB: I also tried with --nocache, without --passive and without --lang en-us (the original layout was generated only for that language, so I assumed it has to be given).
In all cases I briefly see a dialog come up with a progress bar indicating that stuff gets loaded and unpacked into %LOCALAPPDATA%\Temp (which makes sense given the read-only media), but then there is silence and the respective process appears to quit without doing anything. So I don't even get an indication of what I may have invoked incorrectly. I also checked the event log, but came away empty-handed.
I am asking the question specifically for VS 2019 and 2022, but the bootstrappers seem to be largely unified anyway. So pick one of those versions to answer.
PS: Alternatively it would also help if you showed me how to turn up verbosity so that I can diagnose why the invoked program quietly quits.
I tried the following both with the network connected and disconnected, where X: was the mounted .iso file. All attempts failed silently without any indication of what was wrong, as per the above description. No UAC elevation prompt appeared either.
X:\Professional\vs_setup.exe --layout X:\Professional --verify --noweb --lang en-us
X:\Enterprise\vs_setup.exe --layout X:\Enterprise --verify --noweb --lang en-us
X:\BuildTools\vs_setup.exe --layout X:\BuildTools --verify --noweb --lang en-us
X:\Community\vs_setup.exe --layout X:\Community --verify --noweb --lang en-us
I tried this on two different VMs, and also with an already elevated prompt as well as a normal prompt. The results were the same as described in the initial question for all combinations (i.e. editions, offline/online, elevated/not).
Found the underlying issue. As it so happens, mounting an .iso file means it gets mounted read-only. That seems to be the single defining factor here.
Exhibit #1 (my scenario with read-only layout directory):
Yes, I could have left some more seconds or minutes of still image at the end, but take my word there is nothing happening there.
Exhibit #2 (same contents but copied to a writable location):
So to conclude: this appears to be a defect or, euphemistically, an undocumented design decision. It is beyond me why verification/validation of data would require more than read-only access, though.
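Given that finding, a workaround is to copy the layout off the read-only image to a writable location before verifying it. The sketch below assumes a hypothetical scratch folder D:\VSLayout; the bootstrapper also normally leaves dd_*.log files in %TEMP%, which is the closest thing to the verbosity asked about above.
rem copy the layout from the read-only .iso to a writable folder and verify it there
robocopy X:\Professional D:\VSLayout /E
D:\VSLayout\vs_setup.exe --layout D:\VSLayout --verify --noweb --lang en-us
rem if it still exits silently, check the most recent bootstrapper logs
dir /o-d %TEMP%\dd_*.log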
Does this do what you need?
I used the following command line:
.\vs_setup.exe --layout C:\Demo --verify --noweb --lang en-us

How to turn off CLANG diagnostics in Rstudio with Rcpp?

Some time ago I enabled Clang diagnostics in RStudio for Rcpp.
I don't remember exactly how, but it was some line to start it that I found either here or on another site.
Now every time I edit Rcpp code I get constant Clang updates in the console such as:
clang version 5.0.2 (tags/RELEASE_502/final)
Target: x86_64-pc-windows-msvc
TOTAL MEMORY: 41 mb (cpp1exception.cpp)
PERFORMANCE 285 ms (cpp1exception.cpp)
The real problem is that this diagnostic seems to slow down input. I type something, anything, and the RStudio GUI seems to pause until the Clang output finishes.
So I simply want to turn off the diagnostics, or make it how it was before.
Update:
The code to turn it on was found here: Rstudio no autocomplete with Rcpp Armadillo?
Specifically the line .rs.setClangDiagnostics(2).
Once I knew it had been enabled with .rs.setClangDiagnostics(2), some searching showed that I simply needed to use:
.rs.setClangDiagnostics(0)
to turn it off, and it did.

Windows Performance Analyzer Missing ImageId Event

I have an application that I want to profile using Windows Performance Analyzer. It all works, but I don't get any reasonable stack traces from my application.
The application in question is a demo application. This is to give me confidence that everything checks out before I profile another application. Since I have full control over my demo application, I included some marker functions that should show up in the stack trace.
When running the application on Windows 7 [1], Process Explorer shows the correct stack trace for the part that I want to profile. Here is the stack trace with the marker functions in lines 7 - 9:
Since I installed all performance analysis tools inside a Windows 10 VM [2], I started profiling there. The first thing to notice: Process Explorer does not show the correct stack trace. The marker functions that I implemented are nowhere to be found.
Nevertheless, I recorded performance traces using UIforETW and Windows Performance Recorder. When opening them in WPA and focussing on the target application, this is the stack trace:
All the information I'm interested in is missing. The stack shows up as <Application>.exe!<Missing ImageId event>.
What did I do wrong?
In case it gives you a hint, here is the relevant software that is installed:
[1]: The Windows 7 computer has Visual Studio (C#) installed.
[2]: The Windows 10 VM doesn't have Visual Studio, but has WinDBG (Preview) and the Windows Performance Toolkit installed.
I tagged delphi, because the target application is written in Delphi.
The Windows 10 WPA (as well as Windows 8.1, to a lesser extent) dropped support for older debug symbol formats; it now only supports the "RSDS" format that has been standard since MSVC 7. PE files using older symbol file formats (for example, VB6 generates NB10 PDB files) will result in that "Missing ImageId event" error.
(The message itself is technically incorrect; there likely is an ImageId event in the trace file, but WPA is looking for an ImageId/DbgID_RSDS event, which can't be generated for non-RSDS PDBs.)
<Missing ImageId event> will also be reported when the session is not merged with the "NT Kernel Logger" which provides some information necessary to resolve the symbols.
The "proper" way to stop the session is:
xperf.exe -stop my_trace -stop -d merged_trace.etl
Note that the second -stop is necessary to stop another session (implicitly the "NT Kernel Logger"), and -d merges both into merged_trace.etl.
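For reference, a kernel-logger-only capture that already includes the image-load and stack-walk data WPA needs might look like this sketch (not taken from the question; the flags are standard xperf kernel flags):
rem start the NT Kernel Logger with process, image-load and sampled-profile data,
rem plus stack walking for the profile samples
xperf -on PROC_THREAD+LOADER+PROFILE -stackwalk Profile
rem ... run the scenario you want to profile ...
rem stop the kernel logger and merge the result into a single .etl file
xperf -d merged_trace.etl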

Showing VIM makeprg progress on Windows

When Vim is used to run builds with the GNU make utility, there are two issues I see with the default configuration:
Background execution without freezing the editor.
Showing execution progress like the Emacs compile/grep commands.
Background execution is possible with a simple :!start or with plugins like dispatch or AsyncExecute, etc.
None of these options shows the progress in a scratch window with the warnings/errors emitted during the build.
Is there anything I am missing?
Searching the web took me to the shellpipe/tee workaround, which does not seem to work on Windows even after installing tee.exe.
Vim only parses the :make output after the command has finished.
If you launch the build asynchronously, you'd also have to periodically read the resulting output and tell Vim to parse it via :cfile errorfile. There may be a plugin that provides such auto-reload logic, but I'm not aware of any.
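As a rough sketch of that idea (not a ready-made solution; build.log is just an arbitrary file name, and :!start assumes gvim on Windows), you could redirect the build output to a file and pull it into the quickfix list whenever you like:
" launch the build without blocking the editor, capturing its output
:!start cmd /c "make > build.log 2>&1"
" later (manually, or from a timer/autocmd in newer Vims), load the current
" contents of the log into the quickfix list and open it
:cfile build.log
:copen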
In general, there's very little asynchronicity and parallelism built into Vim (possibly due to its age and implementation in C).

Fonts in pdf documents screwed up when generated with latex (specifically, pdflatex) on mac osx

My colleague suggests that texniscope is somehow to blame and that I should try purging it from my system. I really hope not to have to resort to that!
Possible clues:
This wasn't an issue till I upgraded to Leopard.
When I say the fonts are screwed up, I mean the main text looks like maybe it's the default mac system font, and all math is completely unreadable. Basically all special symbols are completely garbled.
I installed latex from here: http://www.tug.org/mactex/. I had already had texniscope installed.
When I run /usr/texbin/pdflatex foo.tex, it seems to work:
This is pdfTeXk, Version 3.1415926-1.40.9 (Web2C 7.5.7)
%&-line parsing enabled.
entering extended mode
...
but the resulting pdf file has screwed up fonts.
The same thing happens both with pdflatex on the command line and when using TeXShop.
Apple knows about the problem and isn't planning to fix it (I had a faculty member spend a lot of time testing and submit a bug to Apple). Their claim is that PDFTeX is embedding the fonts incorrectly, and they have fixed the Apple PDF library to be more strict about what it will and won't accept, which means that you will continue to see problems with PDF documents created with PDFTeX in Preview, TeXShop, or other tools that display PDF using Apple's PDF engine. Unfortunately, they weren't at all clear about exactly what it is that PDFTeX is doing wrong, which makes fixing it or even reporting the bug to the PDFTeX developers problematic. Note that Adobe's Acrobat or Reader applications can often display these documents without any problems; presumably Adobe's error-checking is more liberal than Apple's.
You can actually recover from this problem without rebooting, although you may see it recur with the same document in the same session. You need to run
atsutil server -shutdown
which will kill the Apple Type Services server daemon (ATSServer) and spawn a new instance, coincidentally rebuilding its cache files.
TUG recently released updated binaries fixing the bug that triggers the font cache corruption: http://www.tug.org/mactex/fontcache/
It seems I found the answer, from http://www.stat.duke.edu/~dmm36/tech.php, pasted below. Alas, it appears I have to give up TeXniscope. I like TeXniscope much better than Skim because it's much simpler, has better keyboard shortcuts for paging, and Skim makes you manually refresh the pdf every time there's a latex error (otherwise Skim auto-refreshes).
Quoted from http://www.stat.duke.edu/~dmm36/tech.php:
After recently upgrading to Leopard, something very strange and terrible began happening with pdf files created by latex (MacTeX 2007 distribution). The punchline is that fonts were not being displayed correctly by any application that used Apple's native pdf engine (e.g. preview.app, skim.app, Texniscope.app, LaTeXit.app, but not adobe reader 8). More mysterious was the fact that the same document could render differently on multiple openings.
Much googling ensued, until I found a thread on the mac tex newsgroup which suggested that the problem lay in corrupted font caches. Another search brought about this hint on how to delete all font caches in Leopard. From the terminal, issue the following commands:
sudo rm -rf `lsof | grep com.apple.ATS/annex.aux | grep Finder | cut -c 66-139`
(replace lsof with /usr/sbin/lsof if /usr/sbin is not in your path)
sudo rm -rf /private/var/folders/*/*/-Caches-/com.apple.ATS
And then reboot. This fixed the font problem for me.
NB: part of this problem appears to be the result of TeXniscope.app screwing up the font cache. For example, if you delete the font cache, reboot, and open something in preview it will look fine, but as soon as you open something in TeXniscope again, back to the drawing board. If you are experiencing this problem and using TeXniscope as your pdf previewer, (as in aquamacs), you should switch to Skim as your pdf previewer. It's pretty nice, and the Skim wiki has instructions for how to integrate it with Aquamacs. TeXniscope isn't under active development anyway.
This bug has driven me nuts. Inspired by this hint, here is the best way I found to cope with it, namely executing the following sequence in a shell:
atsutil databases -removeUser
sudo atsutil databases -remove
atsutil server -shutdown
atsutil server -ping
You may add this sequence as a shell function in your shell config file (mine is .zshrc):
function atsrm()
{
atsutil databases -removeUser
sudo atsutil databases -remove
atsutil server -shutdown
atsutil server -ping
}
...and simply call atsrm in a terminal to purge the font cache. Be aware that Skim will crash if it was open, and some applications may display some characters improperly, so you will have to restart them.
Look at the pdf in Adobe Reader under the document properties. If you have Type 3 (?) bitmap fonts for the math, you need to tell the driver to embed the proper Type 1 vector fonts into the resulting document.
I use latex with dvips and then convert to PDF on Linux. It used to be that I had to tell it to do this, but now it seems that at least the package on Ubuntu has the proper font settings.
Look on the web to tell you how to embed the proper fonts into the document.
On second thought, maybe you don't have any of the fonts installed on your system or none of your fonts are being embedded into the document.
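A quick way to check this from the command line is the pdffonts utility from xpdf/poppler (assuming it is installed; a sketch, not something the original poster mentioned):
# list the fonts used by the PDF; the "type" column shows Type 1 vs Type 3,
# and the "emb" column shows whether each font is embedded
pdffonts foo.pdf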
I'm a bit surprised by your problem with MacTeX. I recently installed the 2008 version and it is working like a charm, be it pdftex/latex or xetex/latex. Even with the previous teTeX installation I had, fonts were not a problem. Can you put your foo.tex somewhere for us to test?
