I'm trying to run the YourKit agent inside the container. But when I run my application I get: Could not find agent library /home/jboss/app/libyjpagent.so in absolute path, with error: libc.musl-x86_64.so.1: cannot open shared object file: No such file or directory
How can I install additional libc libs needed for YourKit to run in Quarkus with Jib?
The YourKit agent is statically linked and has no additional dependencies. Is your Alpine 64-bit? What is the output of ldd -v /home/jboss/app/libyjpagent.so?
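To answer that, the ldd listing can be classified with a small helper — a sketch only, the function name is made up for illustration; in practice you would capture `ldd -v /home/jboss/app/libyjpagent.so` inside the container and feed it in:

```shell
#!/usr/bin/env bash
# Sketch: decide which libc an ldd listing refers to.
# (libc_flavor is a hypothetical helper, not a real tool.)
libc_flavor() {
  local listing=$1
  if printf '%s\n' "$listing" | grep -q 'libc\.musl'; then
    echo musl            # built against Alpine's musl libc
  elif printf '%s\n' "$listing" | grep -q 'libc\.so\.6'; then
    echo glibc           # built against GNU libc
  else
    echo static-or-unknown
  fi
}

# The error message from the question suggests a musl-linked build:
libc_flavor "libc.musl-x86_64.so.1 => not found"   # prints: musl
```

If the listing mentions libc.musl-x86_64.so.1, the agent build in use expects Alpine's musl, which matches the "cannot open shared object file" error above.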
Using Windows 10 64-bit, Cabal-3.4.0.0, ghc-8.10.7.
I installed OpenBLAS in the MSYS2 environment with the command
pacman -S mingw-w64-x86_64-openblas.
Then, I successfully installed hmatrix-0.20.2 with the command
cabal install --lib hmatrix --flags=openblas --extra-include-dirs="C:\\ghcup\\msys64\\mingw64\\include\\OpenBLAS" --extra-lib-dirs="C:\\ghcup\\msys64\\mingw64\\bin" --extra-lib-dirs="C:\\ghcup\\msys64\\mingw64\\lib"
I am trying to build a simple test project using cabal build cabalhmatrix, with Main:
module Main where
import Numeric.LinearAlgebra
main :: IO ()
main = do
  putStrLn $ show $ vector [1,2,3] * vector [3,0,-2]
But now I am getting output
Resolving dependencies...
Build profile: -w ghc-8.10.7 -O1
In order, the following will be built (use -v for more details):
- hmatrix-0.20.2 (lib) (requires build)
- cabalhmatrix-0.1.0.0 (exe:cabalhmatrix) (first run)
Starting hmatrix-0.20.2 (lib)
Failed to build hmatrix-0.20.2. The failure occurred during the configure
step.
Build log (
C:\cabal\logs\ghc-8.10.7\hmatrix-0.20.2-6dd2e8f2795550e4dd624770ac98c326dacc0cac.log
):
Warning: hmatrix.cabal:21:28: Packages with 'cabal-version: 1.12' or later
should specify a specific version of the Cabal spec of the form
'cabal-version: x.y'. Use 'cabal-version: 1.18'.
Configuring library for hmatrix-0.20.2..
cabal-3.4.0.0.exe: Missing dependencies on foreign libraries:
* Missing (or bad) C libraries: blas, lapack
This problem can usually be solved by installing the system packages that
provide these libraries (you may need the "-dev" versions). If the libraries
are already installed but in a non-standard location then you can use the
flags --extra-include-dirs= and --extra-lib-dirs= to specify where they are.If
the library files do exist, it may contain errors that are caught by the C
compiler at the preprocessing stage. In this case you can re-run configure
with the verbosity flag -v3 to see the error messages.
cabal-3.4.0.0.exe: Failed to build hmatrix-0.20.2 (which is required by
exe:cabalhmatrix from cabalhmatrix-0.1.0.0). See the build log above for
details.
What should I do to correctly build that package?
I guess I need to somehow pass the arguments --flags=openblas --extra-include-dirs="C:\\ghcup\\msys64\\mingw64\\include\\OpenBLAS" --extra-lib-dirs="C:\\ghcup\\msys64\\mingw64\\bin" --extra-lib-dirs="C:\\ghcup\\msys64\\mingw64\\lib" to hmatrix during compilation, but I don't know how to do that. To be honest, I don't understand which program those arguments are actually for (cabal, ghc, ghc-pkg, or something else), nor why cabal is trying to install hmatrix again. I can see hmatrix in the directory "C:\cabal\store\ghc-8.10.7\hmatrix-0.20.2-e917eca0fc7690010007a19f4f2a3602d86df0f0".
Created cabal.project file:
packages: .

package hmatrix
  flags: +openblas
  extra-include-dirs: C:\\ghcup\\msys64\\mingw64\\include\\OpenBLAS
  extra-lib-dirs: C:\\ghcup\\msys64\\mingw64\\bin, C:\\ghcup\\msys64\\mingw64\\lib
After adding the libopenblas.dll location to the PATH variable, the cabal project is working.
Even though there is the --lib flag, it's generally best to work under the assumption that Cabal doesn't do library installs. Never install a library, instead just depend on it – and have Cabal install, update etc. it whenever necessary.
But then how can you pass the necessary flags? With a cabal.project file.
packages: .

package hmatrix
  flags: openblas
  extra-include-dirs: C:\\ghcup\\msys64\\mingw64\\include\\OpenBLAS
  ...
Put this file in the working directory of your own project, together with cabalhmatrix.cabal. Then running cabal build in that directory will use an hmatrix install with the suitable library flags.
I'm trying to debug my application on the target device. I'm using gdb-multiarch on the host and gdbserver on the target. Everything works fine with breakpoints and stepping.
My problem is that my application depends on some of my own libs. When I step into a function of one of these libs, the source code is not found. There is no /usr/src in my recipe-sysroot folder.
Steps I have done:
Generate eSDK: bitbake my-image -c populate_sdk_ext
Installed the eSDK to ~/my_sdk
devtool modify -n my-recipe ~/repositories/my-application
devtool build my-recipe
When I check the my-image.tar.gz generated by bitbake, I can find the whole /usr/src folder. But the eSDK and devtool don't extract those files to the recipe-sysroot. How can I get this done? Is there an option I'm missing?
Did you try to use devtool add to add the source? In Yocto, packages are usually debug-free and source-free; for example, net-utils is split into net-util, net-util-dbg, net-util-dev and net-util-src.
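As a hedged sketch of that idea: dbg-pkgs and src-pkgs are standard Yocto image features, so one way to get debug info and sources pulled in is to enable them before regenerating the image/eSDK (whether they propagate into recipe-sysroot may depend on your Yocto release):

```
# conf/local.conf (sketch): install -dbg and -src packages so debug
# info and /usr/src sources end up in the image / SDK
EXTRA_IMAGE_FEATURES += "dbg-pkgs src-pkgs"
```

After changing this, rebuild the image and regenerate the eSDK with bitbake my-image -c populate_sdk_ext as in the steps above.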
I want to take the GCC Compiler that is on my machine and all its dependencies and zip them up in a deployment package that I can send off to AWS Lambda (That way I can use a Lambda to compile C code). Is there an easy way to package the whole thing in one go so I can deploy and use it from AWS Lambda?
This is what I have right now
However when I invoke the function I get
"gcc: error trying to exec 'cc1': execvp: No such file or directory\n"
as the response. Currently, the way I got gcc and the dependencies you see in the left panel was by spinning up an Amazon Linux Docker container, installing gcc, and then zipping up gcc and the dependencies I found with the ldd command.
The AWS Lambda runtime is described here. Basically, it's Amazon Linux. If I were you, I would grab the specified AMI and create an EC2 instance from it, or just create an Amazon Linux 2 EC2 instance. Then I would log in to that instance and compile the binaries you need. Finally, I would export them in a ZIP file and ship that with the Lambda. This way the chances are high that the binaries will work on Lambda.
Great question!
As you say, just packaging the binary doesn't help because you're missing the shared object (.so) files or some other dependencies. You can find your dependencies by running something like ldd, and this question helps. Projects like yumda try to simplify this and are definitely worth a shot.
But under the hood, Lambda uses AmazonLinux and there's really no reason it can't be done. High level points we need are:
Build the binary in an Amazon Linux container
Determine the binary and its dependencies
Copy the binary and dependencies out of the container into a lambci container
Test out the lambci container (you'd typically need to set some env vars for this to work, e.g. $LD_LIBRARY_PATH)
Once it runs, package that as a zip and load it into your Lambda, remembering to set the right env vars
As an option, I'd package gcc as a layer, so you can share it.
When I searched around, it looked like someone had done exactly this here. Hopefully it's exactly what you're looking for.
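The "determine the binary and its dependencies" step above can be sketched as a small helper that parses ldd output into resolved library paths (collect_deps is a made-up name; the line format assumed is the usual `libfoo.so => /path/libfoo.so (0x...)`):

```shell
#!/usr/bin/env bash
# Sketch: extract resolved library paths from ldd-style output so they
# can be copied into the deployment package. Lines without "=>" (vdso,
# the dynamic loader) and unresolved "not found" entries are skipped.
collect_deps() {
  awk '$2 == "=>" && $3 ~ /^\// { print $3 }'
}

# Usage against the real binary, inside the Amazon Linux container:
#   ldd "$(command -v gcc)" | collect_deps | xargs -I{} cp {} package/lib/
collect_deps <<'EOF'
	linux-vdso.so.1 (0x00007ffd2e5f2000)
	libm.so.6 => /lib64/libm.so.6 (0x00007f1a2c000000)
	libc.so.6 => /lib64/libc.so.6 (0x00007f1a2bc00000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f1a2c400000)
EOF
```

On the sample input this prints /lib64/libm.so.6 and /lib64/libc.so.6. Note that this only catches dynamically linked libraries; helper programs gcc execs at runtime (like cc1, the error from the question) are not in ldd output and must be copied separately.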
Configure, build and install gcc into a specific directory, specified by --prefix option to configure.
After installing, change the gcc spec file so that it hardcodes -rpath into executables and shared libraries; that way you do not need to tinker with LD_LIBRARY_PATH (which is the wrong solution most of the time) to make the executables find the right libstdc++.so, libgcc_s.so and friends.
rsync the directory onto another machine into the same place in filesystem.
Or archive the install directory and unpack it on your target machine.
However, the target should have the same libc and system libraries gcc was built with; otherwise this may not work as intended.
Alternatively, build locally and deploy your executables with all their dependencies using Exodus.
I want to use wkhtmltopdf in my PHP application.
Therefore I added wkhtmltopdf to my apt.yml file and hoped that everything would work...
...unfortunately, it doesn't.
Every time I run wkhtmltopdf google.ch output.pdf I get the following error:
wkhtmltopdf: error while loading shared libraries: libGL.so.1: cannot open shared object file: No such file or directory
Does anybody know how to set up wkhtmltopdf correctly in the php-buildpack of Cloud Foundry?
Two possibilities:
You are missing shared library dependencies. You'll need to add those to apt.yml so they get installed as well. It looks like libgl1-mesa-dev might be what you're missing; there could be others though. If you run ldd wkhtmltopdf, you can see a list of all the dependencies and what's missing.
The dependencies are installed, but they're not found when you try to run wkhtmltopdf. If you're running cf ssh to go into an app container so you can run wkhtmltopdf, this might be the issue. Try running cf ssh "<app-name>" -t -c "/tmp/lifecycle/launcher /home/vcap/app bash ''" instead. Otherwise, you need to manually source the .profile.d/* scripts. Buildpacks set env variables in these scripts, and they often indicate where shared libraries can be loaded from.
Hope that helps!
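For the first possibility, a minimal apt.yml sketch would look like this (the package name follows the suggestion above and may need adjusting; on some stacks the runtime package libgl1-mesa-glx may be what provides libGL.so.1):

```
---
packages:
- libgl1-mesa-dev
```

With this in place, the apt buildpack installs the listed packages alongside the php-buildpack, and the buildpack's .profile.d scripts point LD_LIBRARY_PATH at the install location.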
I'm trying to build a Haskell Stack project whose extra-deps includes opencv, which in itself depends on OpenCV 3.0 (presently only buildable from source).
I'm following the Docker integration guidelines, and using my own image which builds upon fpco/stack-build:lts-9.20 and installs OpenCV 3.0 (Dockerfile.stack.opencv).
If I build my image I can confirm that opencv is installed and visible to pkg-config:
$ docker build -t stack-opencv -f Dockerfile.stack.opencv .
$ docker run stack-opencv pkg-config --modversion opencv
3.3.1
However, if I specify this image in my stack.yaml:
docker:
  image: stack-opencv
Attempting to stack build yields:
Configuring opencv-0.0.2.0...
setup: The pkg-config package 'opencv' version >=3.0.0 is required but it
could not be found.
I've run the build without the Docker integration, and it completes successfully.
The Dockerfile is passing CMAKE_INSTALL_PREFIX=$HOME/usr.
When running docker build, the root user is used, and thus $HOME is set to /root.
However, when doing stack build, the stack user is used; that user has no permission to read /root, and thus pkg-config cannot find opencv.
By removing the -D CMAKE_INSTALL_PREFIX=$HOME/usr flag from cmake, the default prefix (/usr/local) is used instead. This is also accessible to the stack user, and thus pkg-config can find it during a stack build.
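The fix can be sketched as a Dockerfile fragment (the build directory and cmake options here are assumptions; the point is only the absence of -D CMAKE_INSTALL_PREFIX, so the default /usr/local prefix is used):

```
# Dockerfile.stack.opencv (sketch): install OpenCV under /usr/local so
# the non-root `stack` user used by the Docker integration can read it
RUN cd /opt/opencv/build && \
    cmake -D CMAKE_BUILD_TYPE=RELEASE .. && \
    make -j"$(nproc)" && \
    make install && \
    ldconfig
```

After rebuilding the image, pkg-config --modversion opencv should succeed for any user, including the one stack build runs as.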