`npm start` in docker ends with: Please install a supported C++11 compiler and reinstall the module - c++11

I got an issue while running a Docker container, getting this error:
error: uncaughtException: Compilation of µWebSockets has failed and there is no pre-compiled binary available for your system. Please install a supported C++11 compiler and reinstall the module 'uws'.
Here is a full stack trace: http://pastebin.com/qV0hzRxL
This is my Dockerfile:
FROM node:6.7-slim
# ----- I added this, but it didn't help
RUN apt-get update && apt-get install -y gcc g++
RUN gcc --version
RUN g++ --version
# ------------------------------------
WORKDIR /usr/src/app
ENV NODE_ENV docker
RUN npm install
CMD ["npm", "start"]
Then I build it successfully with: sudo docker-compose build --no-cache chat-ws (chat-ws is the name of the image)
And sudo docker-compose up chat-ws ends with the error above.
Note: the Docker image is part of a composition in docker-compose.
EDIT: Parts of docker-compose.yml
chat-ws:
build: ./dockerfiles/chat-ws
links:
- redis
- chat-api
ports:
- 3000:3000
volumes_from:
- data_chat-ws
And:
data_chat-ws:
image: node:6.7-slim
volumes:
- ${PATH_CHAT_WS}:/usr/src/app
command: "true"
Any ideas? Please?
Thanks, Peter

For me, the problem was that my Darwin version was 57 (OS X 10.12.6):
npm install uws # for me this was uws#8.14.1, which includes a build for my Darwin version
Now copy the compiled version to where your OS expects it:
cp node_modules/uws/uws_darwin_57.node node_modules/socketcluster-server/node_modules/uws/

It's late, and I only really learned about docker-compose and Stack Overflow today, so forgive me if I am making some really obvious mistakes here, but:
Have you tried running the image you built by itself? docker run yourimage, with whichever command you need. Does that run fine?
In your docker-compose.yml, why does your data_chat-ws service use node:6.7-slim as its image? Shouldn't that be the image you built, i.e. image: chat-ws?
This is where I would start debugging: first make sure you are actually able to run the image, leaving aside all the docker-compose machinery, and only once it runs fine, add it to your docker-compose.yml. To help check whether you can actually run your container, see the example in the official Node image documentation (scroll down to the 'How to use this image' part of the page).
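For example, to test the image in isolation (a sketch; the build path is taken from the compose file above, and npm start is assumed to be the intended command):
docker build -t chat-ws ./dockerfiles/chat-ws
docker run --rm -it chat-ws npm start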

No such file or directory when executing command via docker run -it

I have this Dockerfile (steps based on the installation guide from AWS):
FROM amazon/aws-cli:latest
RUN yum install python37 -y
RUN curl -O https://bootstrap.pypa.io/get-pip.py
RUN python3 get-pip.py --user
RUN pip3 install awsebcli --upgrade --user
RUN echo 'export PATH=~/.local/bin:$PATH' >> ~/.bashrc
RUN source ~/.bashrc
ENTRYPOINT ["/bin/bash"]
When I build the image with docker build -t eb-cli . and then run eb --version inside the container (docker run -it eb-cli), everything works:
bash-4.2# eb --version
EB CLI 3.20.3 (Python 3.7.1)
But, when I run the command directly as docker run -it eb-cli eb --version, it gives me this error
/bin/bash: eb: No such file or directory
I think the problem is with bash profiles, but I can't figure it out.
Your .bashrc is only sourced in the layer where that RUN command executes; it does not apply to the resulting container. This is explained more thoroughly in this answer:
Each command runs a separate sub-shell, so the environment variables are not preserved and .bashrc is not sourced
Source: https://stackoverflow.com/a/55213158/2123530
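A minimal illustration of that behavior (a hypothetical Dockerfile snippet, not from the question):
FROM amazon/aws-cli:latest
RUN export FOO=bar       # runs in its own shell; FOO is gone once this step ends
RUN echo "FOO is '$FOO'" # prints: FOO is '' -- the variable did not survive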
A solution for you would be to set the PATH in an environment variable of the container instead, and to blank out the ENTRYPOINT set by your base image.
So you could end up with an image as simple as:
FROM amazon/aws-cli:latest
ENV PATH="/root/.local/bin:${PATH}"
RUN yum install python37 -y \
&& pip3 install awsebcli
ENTRYPOINT []
With this Dockerfile, here is the resulting build and run:
$ docker build . -t eb-cli -q
sha256:49c376d98fc2b35cf121b43dbaa96caf9e775b0cd236c1b76932e25c60b231bc
$ docker run eb-cli eb --version
EB CLI 3.20.3 (Python 3.7.1)
Notes:
you can install the very latest version of pip, as you did, but it is not needed since pip is already bundled in the python37 package
installing packages for the user, with the --user flag, is indeed good practice, but since you are running this command as root, there is no real point in doing so here
the --upgrade flag does not make much sense here either, as the package won't be installed beforehand; upgrading it later is as simple as rebuilding the image
reducing the number of layers in an image by reducing the number of RUN instructions in your Dockerfile is an advisable practice that you can find in the Dockerfile best practices

Dockerfile not working - running command manually works, but same command through run or entrypoint doesn't work

My Dockerfile won't run my entrypoint automatically.
Dockerfile:
FROM mcr.microsoft.com/dotnet/runtime:5.0 AS base
RUN apt update
RUN apt --yes --force-yes install libssl1.1
RUN apt --yes --force-yes install libpulse0
RUN apt --yes --force-yes install libasound2
RUN apt --yes --force-yes install libicu63
RUN apt --yes --force-yes install libpcre2-16-0
RUN apt --yes --force-yes install libdouble-conversion1
RUN apt --yes --force-yes install libglib2.0-0
RUN apt --yes --force-yes install telnet
RUN apt --yes --force-yes install pulseaudio
RUN apt --yes --force-yes install libasound2-dev
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["SDKStandalone/SDKStandalone.csproj", "SDKStandalone/"]
RUN dotnet restore "SDKStandalone/SDKStandalone.csproj"
COPY . .
WORKDIR "/src/SDKStandalone"
RUN dotnet build "SDKStandalone.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "SDKStandalone.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
RUN chmod +x /app/SDKContainer/startup.sh
ENTRYPOINT ["/app/SDKContainer/startup.sh"]
What also doesn't work is if I change the last line to:
ENTRYPOINT ["/bin/bash", "/app/SDKContainer/mySDK"]
My Startup file contains:
#!/bin/bash
/app/SDKContainer/mySDK &
What does work, is if I open bash from the running container, and do either:
chmod +x /app/SDKContainer/startup.sh
/app/SDKContainer/startup.sh
Or simply
/app/SDKContainer/mySDK
Both of those work fine, but I need my SDK to run automatically on container start and I do not want to start it manually. I don't know if it matters, but for completeness: I am debugging in Visual Studio 2019, the containers are running through a Docker Compose YML, and I have selected 'do not debug'.
Docker compose
version: '3.4'
services:
myproject.server:
image: ${DOCKER_REGISTRY-}myserver
build:
context: .
dockerfile: Server/Dockerfile
sdkstandalone:
image: ${DOCKER_REGISTRY-}sdkstandalone
container_name: sdk1
build:
context: .
dockerfile: SDKStandalone/Dockerfile
sdkstandalone2:
image: ${DOCKER_REGISTRY-}sdkstandalone
container_name: sdk2
build:
context: .
dockerfile: SDKStandalone/Dockerfile
launchSettings.json
{
"profiles": {
"Docker Compose": {
"commandName": "DockerCompose",
"serviceActions": {
"sdkstandalone": "StartWithoutDebugging",
"myproject.server": "StartDebugging",
"sdkstandalone2": "StartWithoutDebugging"
},
"commandVersion": "1.0"
}
}
}
The container exits when the entry point process terminates, and you have ensured that it terminates immediately. Take out the & to run the process in the foreground instead; this will keep your container alive until the job finishes. This is a very common Docker FAQ.
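For example, the startup script could run the SDK in the foreground (a sketch based on the script shown in the question):
#!/bin/bash
# exec replaces the shell with mySDK; the container lives as long as mySDK runs
exec /app/SDKContainer/mySDK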
Unless your parent image was specifically designed this way, you should probably use CMD, not ENTRYPOINT.
As a further aside, apt can install multiple packages in one go, so the long list of RUN commands near the beginning of your Dockerfile can be reduced to just two commands and will run significantly quicker.
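A sketch of that reduction (the update and install combined into a single cached layer; package names taken from the Dockerfile above):
RUN apt-get update && apt-get install --yes \
    libssl1.1 libpulse0 libasound2 libicu63 libpcre2-16-0 \
    libdouble-conversion1 libglib2.0-0 telnet pulseaudio libasound2-dev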
The issue was in Visual Studio debugging itself. When it runs without debugging it doesn't work, but running the docker-compose directly from my command line without Visual Studio works absolutely fine. I will mark this as the correct answer since it solved my issue, but I upvoted #triplee for the good advice and best practices.
This problem was killing me. After spending some hours looking for a resolution, I found this forum thread: Debugging docker compose. VS can't attach to containers
In my case, I had updated my VS, and there is a known problem with Docker Compose v2. They're going to release the fix soon.
For now, disable version 2, then restart Docker and VS. It worked for me.
Command to check the current version: docker-compose --version
Command to go back to the previous version: docker-compose disable-v2
Hope it helps anyone with a similar issue.

How do I get LaraDock to use yum instead of apt-get?

I am trying to setup a container using laradock with the following command:
docker-compose up -d nginx mysql
The problem is I am getting the following error:
E: There were unauthenticated packages and -y was used without --allow-unauthenticated
ERROR: Service 'workspace' failed to build: The command '/bin/sh -c apt-get update -yqq && apt-get -yqq install nasm' returned a non-zero code: 100
Is there a way to get it to use yum instead of apt-get?
(I'm a server noob, thought docker would be easy and it seems that it is. Just can't figure out why it's trying to use apt-get instead of yum. Thanks.)
I suggest reading about the problems with mixing package systems: Getting apt-get on an alpine container
Most official Docker images are available for different versions of Linux (Alpine, Debian, CentOS). I would rather create my own Dockerfile and change the "FROM x:y" line than mix package systems.
But do read the linked answer.
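For illustration, the package manager follows from the base image in the FROM line (a sketch; these image tags are examples, not LaraDock's actual bases):
# Debian-based image: apt-get is available
FROM debian:stretch
RUN apt-get update -yqq && apt-get install -yqq nasm

# CentOS-based image: yum is available instead
FROM centos:7
RUN yum install -y nasm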

Docker hangs on the RUN command OSX

I have been learning Docker using the Docker docs, and first encountered a problem in the quick start guide for Django. The build would run normally up until the second-to-last command. Here is my Dockerfile and the output:
FROM python:3.5.2
ENV PYTHONUNBUFFERED 1
WORKDIR /code
ADD requirements.txt /code
RUN pip3 install -r requirements.txt
ADD . /code
Then when I run:
docker-compose run web django-admin startproject src .
I get the whole thing built and then it hangs:
Installing collected packages: Django, psycopg2
Running setup.py install for psycopg2: started
Running setup.py install for psycopg2: finished with status 'done'
Successfully installed Django-1.10.5 psycopg2-2.6.2
So, since I don't have experience with Compose, I tried the most basic docker build that included a Dockerfile. This one also hung.
FROM docker/whalesay:latest
RUN apt-get -y update && apt-get install -y fortunes
CMD /usr/games/fortune -a | cowsay
And this is the last terminal line before the hang.
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
According to the tutorial, this occurs at the same spot (the second-to-last command, which also happens to be RUN), but there the build continues:
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
---> dfaf993d4a2e
Removing intermediate container 05d4eda04526
[END RUN COMMAND]
So that's why I think it's the RUN command, but I'm not sure why or how to fix it. Does anyone know why this might be occurring? I am using a MacBook Pro 8.1 running OS X El Capitan version 10.11.6 and Docker version 1.13.1, build 092cba3.

Go-compiled binary won't run in an alpine docker container on Ubuntu host

Given a binary, compiled with Go using GOOS=linux and GOARCH=amd64, deployed to a docker container based on alpine:3.3, the binary will not run if the docker engine host is Ubuntu (15.10):
sh: /bin/artisan: not found
This same binary (compiled for the same OS and arch) will run just fine if the docker engine host is busybox (which is the base for alpine) deployed within a VirtualBox VM on Mac OS X.
This same binary will also run perfectly fine if the container is based on one of Ubuntu images.
Any idea what this binary is missing?
This is what I've done to reproduce (successful run in VirtualBox/busybox on OS X not shown):
Build (building explicitly with flags even though the arch matches):
➜ artisan git:(master) ✗ GOOS=linux GOARCH=amd64 go build
Check it can run on the host:
➜ artisan git:(master) ✗ ./artisan
10:14:04.925 [ERROR] artisan: need a command, one of server, provision or build
Copy to docker dir, build, run:
➜ artisan git:(master) ✗ cp artisan docker/build/bin/
➜ artisan git:(master) ✗ cd docker
➜ docker git:(master) ✗ cat Dockerfile
FROM docker:1.10
COPY build/ /
➜ docker git:(master) ✗ docker build -t artisan .
Sending build context to Docker daemon 10.15 MB
Step 1 : FROM docker:1.10
...
➜ docker git:(master) ✗ docker run -it artisan sh
/ # /bin/artisan
sh: /bin/artisan: not found
Now changing the image base to phusion/baseimage:
➜ docker git:(master) ✗ cat Dockerfile
#FROM docker:1.10
FROM phusion/baseimage
COPY build/ /
➜ docker git:(master) ✗ docker build -t artisan .
Sending build context to Docker daemon 10.15 MB
Step 1 : FROM phusion/baseimage
...
➜ docker git:(master) ✗ docker run -it artisan sh
# /bin/artisan
08:16:39.424 [ERROR] artisan: need a command, one of server, provision or build
By default, if using the net package, a build will likely produce a binary with some dynamic linking, e.g. to libc. You can check whether a binary is dynamically or statically linked by viewing the result of ldd output.bin.
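Illustrative ldd output for a dynamically linked Go binary on a glibc host (exact libraries and paths vary):
$ ldd artisan
        linux-vdso.so.1 (0x...)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x...)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)
        /lib64/ld-linux-x86-64.so.2 (0x...)
A statically linked binary reports not a dynamic executable instead.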
There are two solutions I've come across:
Disable CGO, via CGO_ENABLED=0
Force the use of the Go implementation of the net dependencies, netgo, via go build -tags netgo -a -v; this is implemented for certain platforms
From https://golang.org/doc/go1.2:
The net package requires cgo by default because the host operating system must in general mediate network call setup. On some systems, though, it is possible to use the network without cgo, and useful to do so, for instance to avoid dynamic linking. The new build tag netgo (off by default) allows the construction of a net package in pure Go on those systems where it is possible.
The above assumes that the only CGO dependency is the standard library's net package.
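Putting that together for the case in the question, a sketch (the binary name artisan and the base image are taken from the question):
# Build a statically linked binary on the host
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o artisan

# Dockerfile
FROM alpine:3.3
COPY artisan /bin/artisan
ENTRYPOINT ["/bin/artisan"]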
I had the same issue with a Go binary, and I got it to work after adding this to my Dockerfile:
RUN apk add --no-cache libc6-compat
The Go compiler on your build machine probably links your binary against libraries in a different location than Alpine uses. In my case, it was compiled with dependencies under /lib64, but Alpine does not use that folder.
FROM alpine:edge AS build
RUN apk update
RUN apk upgrade
RUN apk add --update go=1.8.3-r0 gcc=6.3.0-r4 g++=6.3.0-r4
WORKDIR /app
ENV GOPATH /app
ADD src /app/src
RUN go get server # server is name of our application
RUN CGO_ENABLED=1 GOOS=linux go install -a server
FROM alpine:edge
WORKDIR /app
RUN cd /app
COPY --from=build /app/bin/server /app/bin/server
CMD ["bin/server"]
I'm working on an article about this issue. You can find a draft with this solution here: http://kefblog.com/2017-07-04/Golang-ang-docker .
What did the trick for me was enabling static linking in the linker options:
$ go build -ldflags '-linkmode external -w -extldflags "-static"'
The -linkmode option tells Go to use the external linker, the -extldflags option sets options to pass to that linker, and the -w flag disables DWARF debug info to reduce binary size.
See go tool link and Statically compiled Go programs, always, even with cgo, using musl for more details.
I had an app that required CGO_ENABLED=1.
The fix for me to run the compiled go binary in a debian-slim container was to build the binary using RUN GOOS=linux GOARCH=amd64 CGO_ENABLED=1 go build -o goapp
And to run the following commands in the debian-slim image:
RUN apt-get update && apt-get install -y musl-dev
RUN ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1
This made it possible to run goapp afterwards.
TIP: ldd goapp showed that libc.musl-x86_64 was missing in the container.
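Putting those pieces together, a hypothetical debian-slim Dockerfile sketch (the symlink path comes from the commands above; the base image tag is an assumption):
FROM debian:bullseye-slim
# musl-dev provides the musl libc; the symlink matches the loader path the binary expects
RUN apt-get update && apt-get install -y musl-dev \
 && ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1
COPY goapp /usr/local/bin/goapp
CMD ["goapp"]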
While executing a Go binary inside a Debian docker container, I faced this issue:
/bin/bash: line 10: /my/go/binary: No such file or directory
The binary had been built using docker-in-docker (dind) from an Alpine container, with the command:
GOOS=linux GOARCH=amd64 go build
I fixed it by using the following env while building the binary:
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build
