I'm trying to put together a custom buildpack with Node.js and the Couchbase module/libraries.
I've gotten as far as using Vulcan to build libcouchbase and libvbucket and getting the buildpack to retrieve and unpack the tgz files for both.
Everything looks OK there, but when npm tries to install the couchbase module I get a bunch of errors, and this line:
"../src/couchbase_impl.h:52:36: warning: libcouchbase/couchbase.h: No such file or directory"
leads me to think that it can't find the libcouchbase headers (which is possible, since they aren't installed in the usual place).
I've tried to add the correct path using CPPFLAGS="-I/app/vendor/couchbase/include/libcouchbase" in both the Config Vars and just exporting that as part of the compile phase, but still no luck.
Here is the gist with the Heroku deploy output and the compile/release buildpack files:
https://gist.github.com/ahamidi/5620503
Any help would be greatly appreciated.
Thanks,
Ali
[Update 1]
I've made some progress and I can now get the slug to compile when deploying to Heroku.
The key was figuring out the environment variables that CouchNode looks for when adding custom include directories.
In this case, the variables were EXTRA_CPPFLAGS and EXTRA_LDFLAGS.
So I updated the compile file to include the following:
export EXTRA_CPPFLAGS="-I$BUILD_DIR/vendor/couchbase/include"
export EXTRA_LDFLAGS="-L$BUILD_DIR/vendor/couchbase/lib -Wl,-rpath,$BUILD_DIR/vendor/couchbase/lib"
The slug compiles and the app is deployed, but I now get a different error in the logs:
Error: libcouchbase.so.2: cannot open shared object file: No such file or directory
So it looks like Node can't find the libcouchbase shared libraries at runtime.
For anyone who is curious or experiencing a similar issue, here's what worked for me.
In order to get the couchbase npm module to install, I had to tell it where to find the libcouchbase libraries (in the compile file):
export EXTRA_CPPFLAGS="-I$BUILD_DIR/vendor/couchbase/include"
export EXTRA_LDFLAGS="-L$BUILD_DIR/vendor/couchbase/lib -Wl,-rpath,$BUILD_DIR/vendor/couchbase/lib"
Then, in order to require couchbase in my app, I had to set the following environment variable:
LD_LIBRARY_PATH="/app/vendor/couchbase/lib:$LD_LIBRARY_PATH"
With the command:
heroku config:add LD_LIBRARY_PATH="/app/vendor/couchbase/lib:$LD_LIBRARY_PATH"
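Putting it together, the relevant part of the compile script looks roughly like this; the tarball location, variable names, and the .profile.d alternative are assumptions rather than copies from the gist:

mkdir -p "$BUILD_DIR/vendor/couchbase"
tar -xzf "$CACHE_DIR/couchbase.tgz" -C "$BUILD_DIR/vendor/couchbase"   # vendored libcouchbase/libvbucket build

# flags that the couchbase module's build reads at npm-install time
export EXTRA_CPPFLAGS="-I$BUILD_DIR/vendor/couchbase/include"
export EXTRA_LDFLAGS="-L$BUILD_DIR/vendor/couchbase/lib -Wl,-rpath,$BUILD_DIR/vendor/couchbase/lib"

# alternative to 'heroku config:add': write a .profile.d script so every dyno
# gets LD_LIBRARY_PATH set at boot (paths are /app/... at runtime)
mkdir -p "$BUILD_DIR/.profile.d"
cat > "$BUILD_DIR/.profile.d/couchbase.sh" <<'EOF'
export LD_LIBRARY_PATH="/app/vendor/couchbase/lib:$LD_LIBRARY_PATH"
EOF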
You should set CPPFLAGS="-I/app/vendor/couchbase/include" and LDFLAGS="-L/app/vendor/couchbase/lib -lcouchbase".
From your script it seems like you are just unpacking libcouchbase without doing any further work. You should also build and install it. The typical magic spell for the Node.js client is ./configure --disable-plugins --disable-examples && make && sudo make install. I'm not sure if the sudo part is needed on Heroku; probably just make install.
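A sketch of what that could look like inside a buildpack compile step, where sudo isn't available, installing into the vendored prefix instead of /usr/local (the source directory name is a placeholder):

cd libcouchbase-2.x.x   # placeholder: whatever directory the source tarball unpacks to
./configure --disable-plugins --disable-examples --prefix="$BUILD_DIR/vendor/couchbase"
make
make install            # no sudo needed because the prefix is writable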
I'm trying to deploy my app to heroku and I keep getting this error:
The required namespace "react" is not available, it was required by "reagent/core.cljs".
But I have
"react": "17.0.2-0",
"react-dom": "17.0.2-0",
"react-highlight.js": "1.0.7",
all in my package.json and I also put
[cljsjs/react "17.0.2-0"]
[cljsjs/react-dom "17.0.2-0"]
in my project.clj. I also ran npm install react. I'm not sure what I'm doing wrong.
It compiles fine using shadow-cljs to my localhost, but it won't compile when I try to push to heroku. Any idea what I'm missing?
Editing to add some more details:
I made a bin/build file based on this blog post, even though I'm not using Fulcro: https://folcon.github.io/post/2020-04-12-Fulcro-on-Heroku/
The bin/build file says this:
#!/usr/bin/env bash
npm install
npx shadow-cljs release main
clojure -A:depstar -m hf.depstar.uberjar fulcro.jar
I added this to my shadow-cljs.edn file:
;; v-- and this!
:release {:compiler-options {:optimizations :advanced}}}
And it also said to add something to my deps.edn file, but I don't have one so I didn't.
I also did the buildpack step to add the clojure and nodejs buildpacks, although I'm not using nodejs to my knowledge.
I had the same problem; it was very easy to fix.
You need to specify the nodejs buildpack first, and only after it the clojure buildpack.
from here:
npm install react react-dom create-react-class
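A sketch of setting that buildpack order with the Heroku CLI (the app name is a placeholder):

heroku buildpacks:clear -a your-app
heroku buildpacks:add heroku/nodejs -a your-app    # runs first: npm installs react, react-dom, ...
heroku buildpacks:add heroku/clojure -a your-app   # runs second: shadow-cljs can now find node_modules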
I am having trouble understanding how the -g flag works in NPM. Specifically I'm struggling to understand how it relates to command-line functionality exposed by NPM modules.
I assumed that the difference between installing a package locally and globally was simply that a local package would not be available outside of the particular project. And of course that a globally installed package would be available in any project. I'm from a Rails background so this for me would be similar to installing a gem into a particular RVM versus installing it into the global RVM. It would simply affect which places it was available.
However there seems to be more significance than just scope in NPM. For packages that have command-line functionality, like wait-on, the package (as far as I can tell) is not available on the command line unless it's installed globally.
Local install doesn't make the command-line functionality available:
$ npm install wait-on
$ wait-on
=> -bash: /usr/local/bin/wait-on: No such file or directory
Global install does expose the command-line functionality:
$ npm install wait-on -g
$ wait-on
=> Usage: wait-on {OPTIONS} resource [...resource]
Description:
wait-on is a command line utility which will wait for files, ports,
sockets, and http(s) resources to become available (or not available
using reverse flag). Exits with success code (0) when all resources
are ready. Non-zero exit code if interrupted or timed out.
Options may also be specified in a config file (js or json). For
example --config configFile.js would result in configFile.js being
required and the resulting object will be merged with any
Can you expose the command-line functionality using a local install?
Is it possible to install locally but also get the command line functionality? This would be very helpful for my CI setup as it's far easier to cache local modules than global modules so where possible I'd prefer to install locally.
If you are using npm 5.2.0 or later, the npx command is included by default. It lets you run binaries from the local node_modules: npx wait-on
For reference: https://www.npmjs.com/package/npx
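For example, a typical CI-style use, waiting on a local port before running tests (the URL and follow-up command are just placeholders):

npx wait-on http://localhost:3000 && npm test   # runs the locally installed wait-on via npx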
I think you can access locally installed modules from the command line only if you add them to the "scripts" section of your package.json. So to use the locally installed version of wait-on, you can add an entry in the "scripts" section of package.json like so: "wait-on": "wait-on". Then to run it, you would do npm run wait-on. You can also do "wo": "wait-on" and then npm run wo; basically, what comes after run is the script entry's name. In node_modules there is a .bin folder, and inside this folder are all the executables you can access this way.
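As a minimal sketch, with a "wait-on": "wait-on" entry in the scripts section you could run (the URL is a made-up example):

npm run wait-on -- http://localhost:3000   # everything after -- is passed through to wait-on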
Installing locally makes the package available to the current project (which keeps all of its node modules in node_modules). This is usually only good for using a module in code, like var module = require('module');, or importing a module.
It will not be available as a command that the shell can resolve until you install it globally with npm install -g module, which makes npm put it somewhere your PATH variable will resolve.
You can find a pretty decent explanation here.
It is also useful to put commands in the scripts block in package.json, as it automatically resolves local commands. That means you can have a script that depends on a package without creating an undocumented dependency on that same package.
If you need to run it locally from the command line, you have to go into node_modules and run the executable from its path.
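Concretely, since npm links local executables into node_modules/.bin, something like this should work:

./node_modules/.bin/wait-on                 # prints the same usage text as the global install
export PATH="$PWD/node_modules/.bin:$PATH"  # or put that directory on PATH for this shell
wait-on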
I want to use wkhtmltopdf in my PHP application.
Therefore I added wkhtmltopdf to my apt.yml file and hoped that everything would work...
...unfortunately, it doesn't.
Every time I run wkhtmltopdf google.ch output.pdf I get the following error:
wkhtmltopdf: error while loading shared libraries: libGL.so.1: cannot open shared object file: No such file or directory
Does anybody know how to set up wkhtmltopdf correctly in the PHP buildpack of Cloud Foundry?
Two possibilities:
You are missing shared library dependencies. You'll need to add those to apt.yml so they get installed as well. It looks like libgl1-mesa-dev might be what you're missing, though there could be others. If you run ldd wkhtmltopdf, you can see a list of all the dependencies and what's missing.
The dependencies are installed, but they're not found when you try to run wkhtmltopdf. If you're running cf ssh to go into an app container so you can run wkhtmltopdf, this might be the issue. Try running cf ssh "<app-name>" -t -c "/tmp/lifecycle/launcher /home/vcap/app bash ''" instead. Otherwise, you need to manually source the .profile.d/* scripts; buildpacks set env variables in these scripts, and they often indicate where shared libraries can be loaded (see the sketch below).
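A rough sketch of both checks from inside the app container; the .profile.d location and the use of command -v to find the binary are assumptions about where the buildpack put things:

# source the env scripts the buildpacks wrote (PATH, LD_LIBRARY_PATH, ...)
for f in "$HOME"/.profile.d/*.sh; do source "$f"; done
# list the shared libraries wkhtmltopdf needs and highlight the missing ones
ldd "$(command -v wkhtmltopdf)" | grep "not found"
wkhtmltopdf https://google.ch output.pdf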
Hope that helps!
I managed to get my Spring Boot website online on Heroku. But I also use wkhtmltopdf to create a PDF. This works locally, but now I have some problems.
Locally it works as follows:
ProcessBuilder pb = new ProcessBuilder(
        "cmd.exe",
        "/c",
        " cd C:\\Program Files\\wkhtmltopdf\\bin && wkhtmltopdf.exe "
        + "http://google.com C:\\MainWebApps\\TestApp\\src\\main\\resources\\userstorage\\Google2.pdf");
But how do I install this on Heroku?
Where do I store the temporary HTML page so I can create a PDF from it?
And where is wkhtmltopdf installed on Heroku?
Can I call wkhtmltopdf with a ProcessBuilder on Heroku?
EDIT
So after ceejayoz's comment I googled a bit more and found some interesting stuff.
To compile the binaries on Heroku I used this:
heroku run /bin/bash
Then I did a curl of wkhtmltopdf like this:
curl -O http://download.gna.org/wkhtmltopdf/0.12/0.12.0/wkhtmltox-linux-amd64_0.12.0-03c001d.tar.xz
Then I tried to extract it on the server but without success:
$ tar -xjvf wkhtmltox-linux-amd64_0.12.0-03c001d.tar.xz
tar (child): wkhtmltox-linux-amd64_0.12.0-03c001d.tar.xz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
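For what it's worth, -j tells tar to expect bzip2; an .xz archive would need -J, and the "Cannot open" message suggests the file was never actually saved under that name, so it's worth checking first:

ls -l wkhtmltox-linux-amd64_0.12.0-03c001d.tar.xz   # confirm the download actually produced the file
tar -xJvf wkhtmltox-linux-amd64_0.12.0-03c001d.tar.xz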
EDIT2
I also found this https://github.com/dscout/wkhtmltopdf-buildpack on github.
So I did following :
heroku buildpacks:set 'https://github.com/heroku/heroku-buildpack-multi.git'
echo 'https://github.com/dscout/wkhtmltopdf-buildpack.git' >> .buildpacks
This created a file named .buildpacks, but how do I proceed from there?
I also found this post, but Vulcan is deprecated and it uses Ruby:
Using Wkhtmltopdf with Nodejs on Heroku
Can somebody provide me with good information? I am completely stuck on this.
You actually have two problems that you need to solve -
How to install/invoke the executable
How to handle the generated .pdf
Assuming you have the basics of Heroku deployment (push to the Heroku git remote), for #1, @ceejayoz is right - check the binary into your git repository. For example, under a ./bin directory. The root of your project (where the Procfile is) will be your working directory, and you should be able to invoke the program with ProcessBuilder using relative paths.
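On the dyno that boils down to something like the following (the binary name and output path are illustrative), which is what the ProcessBuilder arguments would need to express:

./bin/wkhtmltopdf http://google.com /tmp/Google2.pdf   # relative to the project root, no cmd.exe involved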
Caveat - since it looks like you are developing on Windows, you will need to pay attention to ensuring both platform-specific binaries are available, and add some logic to know which one to invoke (for example, by setting/checking a specific environment variable).
I recommend against trying to build with a custom build pack - you will spend a lot of energy for little to no benefit. Aside from the platform issue, you don't need to rebuild a third party tool when your code changes...
The second problem is that you can't leave the generated PDF in place. It will go away when the dyno is restarted (see https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem). Thus, the first thing you should do when the process completes is move the generated file to an external storage service (Amazon S3 is a good starting point).
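For example, if the AWS CLI happens to be available on the dyno (otherwise use the S3 SDK from your Java code), the hand-off could be as simple as this, with a placeholder bucket name:

aws s3 cp /tmp/Google2.pdf s3://your-bucket/pdfs/Google2.pdf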
Hope this helps.
You might want to use wkhtmltopdf-binary. With that solution, you do not need to put the wkhtmltopdf executable into your VCS. You can use it, for example, with Maven or Gradle.
OK. Here's the problem and it's driving me crazy!!!
I followed the instructions online and installed Hadoop, and when running the test it said the Snappy native library couldn't be loaded.
It said I have to install snappy first and then install hadoop-snappy.
I downloaded snappy-1.0.4 from Google Code and did the following:
cd ../snappy-1.0.4
./configure
make
sudo make install
Then the problem comes when I run:
mvn package -Dsnappy.prefix=/usr/local
The post online said that by default snappy should be installed in /usr/local.
But I get the following error, and no matter how I change the path, I still get it:
The goal you specified required a project to execute but there's no POM in the directory. Please verify you invoked the maven from the correct directory.
Is it the wrong directory for mvn? Or is snappy installed improperly? It also complains about a missing POM, which should be an .xml file, but I can't find it anywhere.
Please help!
Alright, so looking at that page, you are in the wrong directory.
The directory you should be in for that step is hadoop-snappy, which you can see has a pom.xml; you can verify by looking at the GitHub repo: https://github.com/electrum/hadoop-snappy.
So, after you follow these steps from the guide you showed me:
Download it (hadoop-snappy) from GitHub
Install libtool, make sure ‘libtoolize’ works
Install Maven 3 if necessary
Change your directory to hadoop-snappy and run the command you were trying before.
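Condensed into commands, that amounts to roughly this (assuming the snappy prefix from above):

git clone https://github.com/electrum/hadoop-snappy.git
cd hadoop-snappy
mvn package -Dsnappy.prefix=/usr/local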