UnsatisfiedLinkError when switching to Temurin Alpine from AdoptOpenJDK Alpine - glibc

My project has shared libraries that we load into Java via System.load(). Previously we used the AdoptOpenJDK Alpine docker image, but we are now trying to migrate to the Eclipse Temurin Alpine image. With the former image everything works, but with the new image, System.load() throws an UnsatisfiedLinkError for a library that should already have been loaded. My guess is this has to do with the swap from glibc to musl, but it seems odd that musl wouldn't work out of the box the way glibc did.
Example:
System.load("/path/first.so");
System.load("/path/second.so"); // UnsatisfiedLinkError: /path/first.so: Error loading shared library second.so: No such file or directory (needed by /path/first.so)
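One way to check the glibc-vs-musl hypothesis is to inspect the dependency chain from inside the container. A rough diagnostic sketch (paths follow the example above; the entry point is a placeholder):

```shell
# Run inside the Temurin Alpine container. musl's dynamic loader only searches
# its default paths plus LD_LIBRARY_PATH, so a dependency of first.so that
# lives in a non-standard directory may resolve under the glibc image's setup
# but not under musl's.
ldd /path/first.so            # shows second.so as unresolved if the loader can't find it
export LD_LIBRARY_PATH=/path  # make the directory visible to the loader
java -cp app.jar Main         # app.jar / Main are placeholders for the real entry point
```

If ldd reports second.so as missing, pointing LD_LIBRARY_PATH at the directory (or installing the libraries into a default search path) is a plausible fix, independent of load order in Java.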

Related

unable to run librdkafka=1.3.0 over docker

I was trying to run librdkafka version 1.3.0 from the Alpine distribution in my docker container using this:
FROM golang:1.13.6-alpine3.10 as base
RUN apk add --no-cache --update librdkafka=1.3.0 librdkafka-dev=1.3.0 --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/community
but got this error while building image:
librdkafka-1.4.2-r0:
breaks: world[librdkafka=1.3.0]
satisfies: librdkafka-dev-1.4.2-r0[librdkafka=1.4.2-r0]
librdkafka-dev-1.4.2-r0:
breaks: world[librdkafka-dev=1.3.0]
Can someone tell me what might possibly be wrong here?
The librdkafka package has been upgraded to 1.4.2.
In Alpine repositories, as opposed to Ubuntu for example, old package versions are not kept. This is mostly done for security reasons, AFAICT. When a package is upgraded, the old version is gone for good. This has the unfortunate side effect of breaking images that depend on specific package versions.
The currently available librdkafka 1.X versions on Alpine repositories are 1.4.2 (edge, 3.12), 1.2.2 (3.11), and 1.0.1 (3.10).
If you must use this exact version, you could try building it from source, using the 1.3.0 tag.
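As a sketch of the two workarounds (repository URL and tarball location are assumptions based on the usual Alpine and librdkafka layouts, so verify them against the current package index):

```dockerfile
# Option 1: take whatever librdkafka version a pinned Alpine release still ships,
# instead of pinning an exact version against the moving edge repository.
FROM golang:1.13.6-alpine3.10 as base
RUN apk add --no-cache librdkafka librdkafka-dev \
    --repository http://dl-cdn.alpinelinux.org/alpine/v3.10/community

# Option 2: build exactly 1.3.0 from source using the upstream tag.
# RUN apk add --no-cache build-base bash curl && \
#     curl -sL https://github.com/edenhill/librdkafka/archive/v1.3.0.tar.gz | tar xz && \
#     cd librdkafka-1.3.0 && ./configure && make && make install
```

Option 1 gives up the exact version but keeps builds reproducible against that release's repository; option 2 keeps 1.3.0 at the cost of a longer build.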

How to add X11 desktop environment in yocto configuration?

I tried to build the BSP for v3msk (linux based embedded system) on Ubuntu 18.04 following the link:
https://elinux.org/R-Car/Boards/Yocto-Gen3-ADAS#Building_the_BSP_for_Renesas_ADAS_boards
I used Yocto v3.21.0
The local.conf I used is available here https://pastebin.com/UyBGzQ2J
I tried adding x11 to distro features.
DISTRO_FEATURES_append = " x11"
I ran
bitbake core-image-x11
and I expect it to build yocto images with X11.
I got this error:
ERROR: Nothing PROVIDES 'core-image-x11'
core-image-x11 was skipped: missing required distro feature 'x11' (not in
DISTRO_FEATURES)
What could be missing in local.conf?
Nothing PROVIDES 'core-image-x11'
means there is no image recipe with this name in the meta layers listed in your build/conf/bblayers.conf file.
Try executing:
find source | grep images | grep x11
to see if you have any layer containing images related to x11. Add the discovered layer to the build/conf/bblayers.conf file, then retry your command:
bitbake core-image-x11
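For example, assuming the find command turns up an image recipe inside some layer (the layer path below is purely illustrative), it can be registered like this:

```shell
# Sketch: register the layer that provides the x11 image recipe, then rebuild.
# The layer path is hypothetical; use whatever the find command discovered.
cd build
bitbake-layers add-layer ../sources/meta-example-graphics
bitbake core-image-x11
```

Note that the error message also reports the missing 'x11' distro feature, so the DISTRO_FEATURES_append = " x11" line from the question must be present in local.conf as well.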

Simple and fast way to install sass in container ubuntu 14.04?

I want to run a script that compiles my sass in a "build" container.
Since this container will repeatedly be restarted I need a robust and quick way to install or use sass. (including ruby and dependencies)
Is there a simple way for sass to be available in a container for scss compiling?
There is a solution using a Ruby container, but this is not possible in my case since I already need a specific container image for the build itself.
Using another container and named volumes is also not possible in my case.
You can make a Docker image that inherits the specific Ruby image you need, add your changes to it, and use it instead.
For example, if you currently use debian:jessie, then you can create a new Dockerfile with:
FROM debian:jessie
# Now you install all the dependencies including Ruby... etc
RUN gem install sass
Then you can build an image from that file, publish it to a docker registry, and use it instead of the one you are currently using.
Take a look at the source code of some Ruby docker image to see how to install Ruby and its dependencies. Of course, you are free to install it however you want (natively, using rbenv, using rvm, etc.).
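A minimal sketch of such a Dockerfile, assuming an Ubuntu 14.04 base to match the build container (package names are the standard Trusty ones; the Ruby version they provide is old but sufficient for the sass gem):

```dockerfile
FROM ubuntu:14.04
# Install Ruby plus headers and a compiler for native gem extensions,
# then the sass gem; clean the apt lists to keep the image small.
RUN apt-get update && \
    apt-get install -y ruby ruby-dev build-essential && \
    gem install sass && \
    rm -rf /var/lib/apt/lists/*
# The container can now compile scss directly, e.g.:
# CMD ["sass", "--update", "scss:css"]
```

Because everything is baked into the image, restarting the container costs nothing extra: sass is immediately available without any install step at runtime.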

Using pkg-config with Haskell Stack's Docker integration

I'm trying to build a Haskell Stack project whose extra-deps includes opencv, which in itself depends on OpenCV 3.0 (presently only buildable from source).
I'm following the Docker integration guidelines, and using my own image which builds upon fpco/stack-build:lts-9.20 and installs OpenCV 3.0 (Dockerfile.stack.opencv).
If I build my image I can confirm that opencv is installed and visible to pkg-config:
$ docker build -t stack-opencv -f Dockerfile.stack.opencv .
$ docker run stack-opencv pkg-config --modversion opencv
3.3.1
However if I specify this image in my stack.yaml:
docker:
image: stack-opencv
Attempting to stack build yields:
Configuring opencv-0.0.2.0...
setup: The pkg-config package 'opencv' version >=3.0.0 is required but it
could not be found.
I've run the build without the Docker integration, and it completes successfully.
The Dockerfile is passing CMAKE_INSTALL_PREFIX=$HOME/usr.
When running docker build, the root user is used, and thus $HOME is set to /root.
However, when doing stack build, the stack user is used; it does not have permission to see /root, and thus pkg-config cannot find opencv.
By removing the -D CMAKE_INSTALL_PREFIX=$HOME/usr flag from cmake, the default prefix (/usr/local) is used instead. This is also accessible to the stack user, and thus pkg-config can find it during a stack build.
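In other words, the fix lives in the image's Dockerfile. Roughly (other cmake flags abbreviated, since the full OpenCV configuration is in Dockerfile.stack.opencv):

```shell
# Sketch of the relevant build steps: install OpenCV under the default
# /usr/local prefix so the non-root `stack` user can also resolve it.
cmake -D CMAKE_BUILD_TYPE=RELEASE ..   # note: no -D CMAKE_INSTALL_PREFIX=$HOME/usr
make -j"$(nproc)" && make install
pkg-config --modversion opencv         # should now resolve for any user, not just root
```

The pkg-config check at the end mirrors the docker run test from the question, but now succeeds for the stack user as well.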

Spawn a new process in a docker container whose image is built from scratch

I'm trying to build a minimal docker image (FROM scratch) that contains 2 executable binaries. Both are binaries built with Go. The entrypoint is set to the first one. It takes some data on the image, transforms it using environment variables, starts a new process executing the second binary and pipes the data as an input for the spawned process.
FROM scratch
COPY bin /opt/my-app
ENTRYPOINT ["/opt/my-app/first", "--run", "/opt/my-app/second"]
When I build this image on my Mac, everything works fine. But when it's built on our build server running Linux, the first process cannot start the second one. It fails with the error "fork/exec /opt/my-app/second: no such file or directory". However, the "second" binary does exist. In both cases docker engine 1.13.1 is used.
It also works if parent image is changed from scratch to debian:jessie.
Are there any limitations of the scratch image that I'm not aware of?
With a scratch image there is no libc (or any other shared libraries). If the same binaries work fine on debian:jessie, I suspect they are dynamically linked, which is Go's default whenever cgo is in play; the "no such file or directory" error then refers to the missing dynamic loader, not to the binary itself. Try building with CGO_ENABLED=0 go build -a -installsuffix cgo, as described here: http://www.blang.io/posts/2015-04_golang-alpine-build-golang-binaries-for-alpine-linux/
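A sketch of a fully static build for both binaries (the source paths under ./cmd are assumptions based on the image layout in the question):

```shell
# Disable cgo so the binaries carry no libc dependency, which makes them
# runnable in a scratch image on any Linux host.
CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o bin/first ./cmd/first
CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o bin/second ./cmd/second
file bin/first   # should report "statically linked"
```

This also explains why the Mac build "worked": cross-compiling from macOS to Linux disables cgo by default, so those binaries were already static, while the Linux build server produced dynamically linked ones.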
