Heroku: install specific TeX packages with the TeX buildpack

My LaTeX file uses many packages, which are imported with \usepackage.
On my local computer the packages are resolved correctly and the .tex file compiles. When deployed on Heroku, it stops running and I get plenty of error messages saying that packages are missing.
For example: cmap.sty, etoolbox.sty, pdfx.sty, and many more. I started adding the files to my working directory, but there are so many of them that it seems there should be an easier solution.
At the moment I am using a TeX Live buildpack. I added its URL to my Heroku buildpacks, and it does the job for simple LaTeX files that don't require any special packages.
I created the file texlive.packages and added it to my repo. The file contains:
collection-bibtexextra
collection-fontsextra
collection-langgerman
collection-xetex
The deployment to Heroku is rejected because the slug exceeds 1.2 GB and only 500 MB are allowed.
These collections are quite big; I can install at most two of them.
I know that these collections contain a lot of packages I don't need.
How can I install only the required packages for my use case? Or how can I exclude some of them?
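One approach that might work, assuming the buildpack feeds each line of texlive.packages to tlmgr install, is to list only the individual packages the log complains about instead of whole collections, e.g.:
cmap
etoolbox
pdfx
Each missing .sty file usually maps to a tlmgr package of the same name; if it doesn't, something like tlmgr search --global --file cmap.sty should report which package provides a given file. That keeps the slug far smaller than pulling in entire collections.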

Related

RPM build: how to get the path of the RPM package that was installed with "rpm -ivh", from a shell script run by the '%post' macro

I'm a newbie at RPM builds, and I did my best to describe this somewhat complicated question in my amateur English...
I have a script (.sh) with some code; what the script does is set up the code, and it needs some user input.
Sadly, I found out that scripts run by rpm cannot get user input. I know that's not the right usage, so I'm not trying to get user input anymore.
My question is:
I'm now trying to get that input from a config file shipped along with the RPM package, but I don't know how to get the RPM package's path in the SPEC file macros, or in the script files run by those macros.
RPM packages are not supposed to "adapt" themselves to user input. I would recommend making sure the installation of the package is always the same. Once the package is installed, you can tell users how to configure the program.
Take git for example: it provides /etc/gitconfig which contains the default packaged configuration. Users can then make their changes to the configuration and save those in ~/.gitconfig. Thus the user configuration is separated from the packaged configuration, so you can keep updating git without losing your configuration.
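In RPM terms that usually means shipping a default config file inside the package and marking it so user edits survive upgrades; a minimal %files sketch (the file path below is just a placeholder, not from the question):
%files
%config(noreplace) %{_sysconfdir}/myapp/myapp.conf
The %post script can then simply read that file instead of needing to know where the .rpm itself was located.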

Google Cloud Functions and shared libraries

I'm trying to use wkhtmltopdf on GCF for PDF generation.
When my function tries to spawn the child process I get the following error:
Error: ./services/wkhtmltopdf: error while loading shared libraries: libXrender.so.1: cannot open shared object file: No such file or directory
The problem is clearly due to the fact that wkhtmltopdf binary depends on external shared libraries which are not installed in GCF environment.
Is there a way to solve this issue, or should I give up and use other solutions (AWS Lambda or GAE)?
Thank you in advance
Indeed, I've found a way to solve this issue by copying all the required libraries into the same folder (/bin for me) that contains the wkhtmltopdf binary. In order to let the binary use the uploaded libraries, I added the following lines to wkhtmltopdf.js:
const path = require('path');
const wkhtmltopdf = require('wkhtmltopdf');
// Run the bundled binary with the bundled shared libraries on the loader path.
wkhtmltopdf.command = 'LD_LIBRARY_PATH=' + path.resolve(__dirname, 'bin') + ' ./bin/wkhtmltopdf';
wkhtmltopdf.shell = '/bin/bash';
module.exports = wkhtmltopdf;
Everything worked fine for a while. All of a sudden I started receiving many connection errors or timeouts from GCF, but I think it's not related to my implementation but rather to Google.
I've ended up setting up a dedicated server.
I have managed to get it working. There are two things that need to be done, as wkhtmltopdf won't work if:
libXrender.so.1 can't be loaded
you are using stdout to collect the resulting PDF; wkhtmltopdf has to write the result into a file
First you need to obtain the correct version of libXrender.
I found out which Docker image Cloud Functions uses as the base for Node.js functions, ran it locally, installed libxrender, and copied the library into my function's directory:
docker run -it --rm=true -v /tmp/d:/tmp/d gcr.io/google-appengine/nodejs bash
Then, inside the running container:
apt update
apt install libxrender1
cp /usr/lib/x86_64-linux-gnu/libXrender.so.1 /tmp/d
I put this into my function's project directory, under a lib subdirectory. In my function's source file, I then set up LD_LIBRARY_PATH to include the /user_code/lib directory (/user_code is the directory where your function will eventually end up being put by Google):
process.env['LD_LIBRARY_PATH'] = '/user_code/lib'
This is enough for wkhtmltopdf to be able to execute. It will still fail, as it won't be able to write to stdout, and the function will eventually time out and be killed (as Matteo experienced). I think this is because Google runs the containers without a tty (just speculation); I can run my code in their container if I run it with the docker run -it flags. To solve this, I invoke wkhtmltopdf so that it writes the output into a file under /tmp (this is an in-memory tmpfs). I then read the file back and send it as my response body. Note that the tmpfs might be reused between function calls, so you need to use a unique file every time.
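Putting the pieces together, a minimal sketch of that approach might look like the following (the exported function name, the query parameter, and the bin/ location of the binary are assumptions for illustration, not the exact code used):
// Sketch: run the bundled wkhtmltopdf binary, write the PDF to a unique file
// under /tmp, then return the file contents as the HTTP response.
const { execFile } = require('child_process');
const fs = require('fs');
const path = require('path');
const crypto = require('crypto');

// Make the libXrender.so.1 committed under lib/ visible to the child process.
process.env['LD_LIBRARY_PATH'] = '/user_code/lib';

exports.makePdf = (req, res) => {
  // /tmp is an in-memory tmpfs that may be reused between invocations,
  // so use a unique file name every time.
  const outFile = path.join('/tmp', crypto.randomBytes(8).toString('hex') + '.pdf');
  const binary = path.resolve(__dirname, 'bin', 'wkhtmltopdf');

  execFile(binary, [req.query.url || 'https://example.com', outFile], (err) => {
    if (err) {
      res.status(500).send(String(err));
      return;
    }
    const pdf = fs.readFileSync(outFile);
    fs.unlinkSync(outFile); // clean the tmpfs up again
    res.set('Content-Type', 'application/pdf');
    res.send(pdf);
  });
};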
This seems to do the trick, and I am able to run wkhtmltopdf as a Google Cloud Function.

Where are files stored in Parse Server?

I'm implementing an instance of Parse Server, and I want to know where Parse Server stores the files.
According to the File Adapter documentation, the default file storage is GridFS in MongoDB.
It depends on the operating system and the type of installation you used.
If installed on Linux/Unix using the global install (npm install -g parse-server mongodb-runner), then your parse-server files will normally be under /usr/lib/node_modules/parse-server (this may differ between Linux versions).
Be careful when editing these files for hot hacks or modifications: if you later choose to upgrade parse-server, they will be overwritten.
Your cloud code directory is normally created by you, so it could be /home/parse/cloud/main.js. It can be in any location of your choice. To set a new location, you set it in the index file or JSON config (depending on your startup process):
cloud: '/home/myApp/cloud/main.js', // Absolute path to your Cloud Code
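For example, if you start parse-server from a JSON config file instead, the same setting lives there (all values below are placeholders):
{
  "appId": "myAppId",
  "masterKey": "myMasterKey",
  "databaseURI": "mongodb://localhost:27017/dev",
  "cloud": "/home/myApp/cloud/main.js"
}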
If you didn't use the global install, then obviously you would need to cd to wherever you cloned the project.
Windows would be similar. Clone (or download the zip of) parse-server from the repo, open a console window, and cd to the folder where you have cloned/extracted the example server, e.g.:
cd "C:\parse-server"
This is where the files will sit for the parse-server. Hope this helps!

How to setup Pydevd remote debugging with Heroku

According to this answer I am required to copy the pycharm-debug.egg file to my server, how do I accomplish this with a Heroku app so that I can remotely debug it using Pycharm?
Heroku doesn't expose the file system it uses for running web dynos to users, which means you can't copy the file to the server via SSH.
So, you can do this in one of the following two ways:
The best possible way to do this is by adding the egg file to your requirements, so that during deployment it gets installed into the environment and hence automatically added to the Python path. But this would require the package to be pip-indexed.
Or, commit the file to your code base, so that when you deploy, the file reaches the server.
Also, in the settings file of your project (if using Django), add the file to the Python path:
import sys
sys.path.append('relative/path/to/file')  # path to the pycharm-debug.egg committed with the code
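Once the egg is importable, the connection back to PyCharm is typically opened with pydevd.settrace; a rough sketch, where the host, port, and egg location are placeholders for your own setup (the debug server must be reachable from the Heroku dyno):
import sys
sys.path.append('vendor/pycharm-debug.egg')  # wherever the egg was committed

import pydevd
# Connect out to the PyCharm debug server listening on your machine.
pydevd.settrace('my-public-host.example.com', port=5678,
                stdoutToServer=True, stderrToServer=True, suspend=False)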

TWiki install/config problems

Debian Etch/Apache 1.3
I have one server happily running TWiki, and I want to replicate it on a second server. apt-get install twiki runs OK except for an apache2 failure. It does appear to have worked out that it needs to use Apache 1.3, though I could be persuaded otherwise!
However, when I go to
myhost/twiki
it successfully goes to /twiki/bin/view.pl/XXX/WebHome but returns
The requested URL /twiki/bin/view.pl/XXX/WebHome was not found on this server.
Apache log shows
File does not exist: /var/www/packages/twiki/bin/view.pl/XXX/WebHome
On the working system, there is no .pl extension on the 'view', which may be vital. Also, I can't see why the packages get installed to the packages dir, but I have moved the twiki dir under it. Not sure if the Apache config needs to be changed.
It would make life simpler if I could remove the 'packages' dir from things, but I can't see how.
Any help on this appreciated!
Thanks,
Martin
Mmm, I stopped maintaining the twiki packages when we forked the project to foswiki.
However, I don't recall ever putting the twiki files into /var/www/packages.
Where are you getting the twiki packages from?
The packages at http://fosiki.com/blog/2007/04/22/debian-repository-for-twiki/ are probably more up to date (than the deb in Debian's own repository) and include plugins, but I've not used or tested anything since 2008; that repo does include all the plugins from twiki.org.
I think your dependency on Apache 1.3 is because this is quite an old Debian package;
http://distributedinformation.com/experimental/dists/experimental/main/binary-i386/Packages depends on apache2.
If you're rolling your own, there's an Apache config generator topic on twiki.org somewhere; we've made many updates to our version on foswiki.org.
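For reference, the generated config usually boils down to something like the stanza below (the filesystem paths are placeholders for wherever your twiki tree actually lives, not taken from the question):
ScriptAlias /twiki/bin/ "/var/lib/twiki/bin/"
Alias /twiki/pub/ "/var/lib/twiki/pub/"
<Directory "/var/lib/twiki/bin">
    Options +ExecCGI
    SetHandler cgi-script
    AllowOverride None
    Allow from all
</Directory>
With the bin scripts mapped like this, they are served without a .pl extension, which matches what the working server does.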
Sven
