Unable to install package libc6-compat on Windows

I am building a Docker image and it will not install libc6-compat on Windows Docker Desktop. This is the full error:
#8 [deps 2/6] RUN apk add --no-cache libc6-compat
#8 sha256:190a4effb95700083113c7ec8bd34c90330cc8b70a393b16624e72fee8f0523d
#8 0.373 fetch https://dl-cdn.alpinelinux.org/alpine/v3.17/main/x86_64/APKINDEX.tar.gz
#8 0.547 48EB05913A7F0000:error:0A000086:SSL routines:tls_post_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1889:
#8 0.550 fetch https://dl-cdn.alpinelinux.org/alpine/v3.17/community/x86_64/APKINDEX.tar.gz
#8 0.550 WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.17/main: Permission denied
#8 0.657 48EB05913A7F0000:error:0A000086:SSL routines:tls_post_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1889:
#8 0.659 WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.17/community: Permission denied
#8 0.659 ERROR: unable to select packages:
#8 0.661 libc6-compat (no such package):
#8 0.661 required by: world[libc6-compat]
#8 ERROR: executor failed running [/bin/sh -c apk add --no-cache libc6-compat]: exit code: 1
------
> [deps 2/6] RUN apk add --no-cache libc6-compat:
------
executor failed running [/bin/sh -c apk add --no-cache libc6-compat]: exit code: 1
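The "certificate verify failed" lines show that TLS to dl-cdn.alpinelinux.org is failing before apk can even download the package index, so "no such package" is only a symptom. A quick way to check whether the failure is host-level (a proxy or VPN intercepting TLS, or a skewed clock in the Docker Desktop VM) rather than anything in the Dockerfile is to run the same fetch in a throwaway container; a hedged sketch, assuming a default Docker Desktop setup:
# If this also fails, the problem is the host environment, not the Dockerfile;
# date is printed because a wrong VM clock also breaks certificate validation
docker run --rm node:16-alpine sh -c \
  'date && wget -q -O /dev/null https://dl-cdn.alpinelinux.org/alpine/v3.17/main/x86_64/APKINDEX.tar.gz && echo TLS OK'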
Here is the Dockerfile:
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
RUN apk add git
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* .npmrc ./
RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm set-script prepare '' && npm ci --legacy-peer-deps; \
  elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
  else echo "Lockfile not found." && exit 1; \
  fi
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]

Related

How to build an un-minified version of the Shopware 6 administration scripts for debugging purposes?

I would like to debug the following not-very-helpful error message when trying to save a custom CMS component in the Shopware 6 administration layout editor:
An error was captured in current module: undefined
errorCaptured # app.js?16504701226454027:1
Xe # vendors-node.js?16504701221582962:2
(anonymous) # vendors-node.js?16504701221582962:2
Promise.catch (async)
Ge # vendors-node.js?16504701221582962:2
n # vendors-node.js?16504701221582962:2
o._wrapper # vendors-node.js?16504701221582962:2
When I have such problems in the storefront, I usually modify build-storefront.sh to call
npm --prefix "${STOREFRONT_ROOT}"/Resources/app/storefront run development
instead of
npm --prefix "${STOREFRONT_ROOT}"/Resources/app/storefront run production
Now I want to adapt build-administration.sh in a similar way:
The original line is
(cd "${ADMIN_ROOT}"/Resources/app/administration && npm clean-install && npm run build)
When I change it to
(cd "${ADMIN_ROOT}"/Resources/app/administration && npm install && mode=development npm run dev)
this would call
"dev": "mode=development webpack-dev-server",
defined in vendor/shopware/administration/Resources/app/administration/package.json.
I do not want to use the dev server, so I changed this to
"dev": "mode=development webpack",
Now the build runs, but I still get a minified app.js as a result.
How can I debug the initial problem further?
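One hedged thing to try before digging deeper: Shopware's admin webpack config may keep minification on regardless of the mode environment variable, while the webpack CLI itself accepts explicit overrides. A sketch, assuming the administration build respects standard webpack CLI flags (not verified here):
"dev": "mode=development webpack --mode development --devtool source-map",
If the config enables optimization.minimize explicitly, --mode development alone will not disable the minifier; in that case the webpack.config.js in the same directory is the next place to look.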

Using Yarn 2 (Berry) for packaging an application in a Docker image

I'm migrating a VueJS application from "classic" Yarn 1.x to Yarn 2. Following the install documentation is straightforward and works without problems.
The tricky part comes when packaging the application in a Docker image.
Current Dockerfile:
FROM node:14-alpine AS build-stage
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY . ./
RUN yarn build --modern \
    && find dist -type f -exec gzip -k "{}" \;
FROM nginx:mainline-alpine as production-stage
RUN apk add --no-cache curl
HEALTHCHECK CMD curl -f http://localhost || exit 1
COPY docker/entrypoint.sh /
RUN chmod +x /entrypoint.sh
COPY docker/app.nginx /etc/nginx/conf.d/default.conf
COPY --from=build-stage /app/dist /usr/share/nginx/html
ENTRYPOINT [ "/entrypoint.sh" ]
Maybe I looked in the wrong places, but I couldn't find any information on what a Yarn 2 Zero-Install setup would look like for a Docker image.
Do you have any recommendation on how to use the Yarn 2 approach in a Dockerfile?
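For the Zero-Install flavour specifically, here is only a hedged sketch of what the build stage might look like, assuming .yarn/cache, .yarn/releases, .pnp.cjs and .yarnrc.yml are all committed to the repository as the Zero-Install documentation describes:
FROM node:14-alpine AS build-stage
WORKDIR /app
# With Zero-Installs the package cache ships with the repository,
# so copying the checkout replaces the network install step entirely
COPY . ./
RUN yarn build --modern \
    && find dist -type f -exec gzip -k "{}" \;
The trade-off is repository size versus build-time network access.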
Ethan's answer makes sense, and it should work. But for me, I was getting this strange error during a build:
> [ 7/10] RUN yarn --version:
#11 1.430 internal/modules/cjs/loader.js:905
#11 1.430 throw err;
#11 1.430 ^
#11 1.430
#11 1.430 Error: Cannot find module '/base/.yarn/releases/yarn-3.1.1.cjs'
#11 1.430 at Function.Module._resolveFilename (internal/modules/cjs/loader.js:902:15)
#11 1.430 at Function.Module._load (internal/modules/cjs/loader.js:746:27)
#11 1.430 at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:76:12)
#11 1.430 at internal/main/run_main_module.js:17:47 {
#11 1.430 code: 'MODULE_NOT_FOUND',
#11 1.430 requireStack: []
#11 1.430 }
Even though I had definitely copied .yarn to the image. Oh well.
I had to actually install yarn v2 within the build:
FROM node:14.17.1 as build
WORKDIR /base
COPY package.json .
RUN yarn set version berry
RUN yarn install --frozen-lockfile
UPDATE
Turns out Docker doesn't copy entire directories the way I thought it did.
I had to add an explicit COPY for the .yarn directory:
COPY .yarn ./.yarn
Solved it for me.
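Putting the update together with the original attempt, a minimal sketch of the stage (assuming the Yarn release file is committed under .yarn/releases and referenced by .yarnrc.yml):
FROM node:14.17.1 as build
WORKDIR /base
# a directory source needs its own COPY with an explicit destination,
# otherwise its contents are flattened into the target
COPY .yarn ./.yarn
COPY .yarnrc.yml package.json yarn.lock ./
# Berry's native equivalent of the v1 --frozen-lockfile flag is --immutable
RUN yarn install --immutable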
Due to a weird catch-22 with yarn 2's package install process, I've found this to be the most effective method of installing yarn@berry with Docker. There's likely a better method, but I'm not aware of one.
FROM node:latest as build
WORKDIR /app
# copy only the package.json file so yarn set version can
# correctly download its modules for berry without overwriting
# the existing yarnrc and cache files. If the rc is added now,
# yarn will attempt to use the berry module without it being
# installed.
COPY package.json .
RUN yarn set version berry
# and _now_ pull in the rest of the build files overriding
# the rc generated by setting the yarn version
# COPY flattens directory sources, so .yarn needs its own
# instruction with an explicit destination
COPY yarn.lock .yarnrc.yml ./
COPY .yarn ./.yarn
RUN yarn install
COPY . .
# continue with your build process
However, I will note that yarn is intended to run from the local .yarn/releases folder, so the best method may simply be to install yarn 2 locally and commit it to the repo, as yarn recommends. Then, as a preliminary step along with pulling in the package.json file, pull in the necessary .yarn files as well. This should work under most circumstances; however, it sometimes gave me difficulty, hence the example above.
FROM node:latest as build
WORKDIR /app
# Copy in the package file as well as other yarn
# dependencies in the local directory, assuming the
# yarn berry release module is inside .yarn/releases
# already
COPY package.json yarn.lock .yarnrc.yml ./
COPY .yarn ./.yarn
RUN yarn install
COPY . .
# continue with your build process

ERROR: unsatisfiable constraints: so:libvpx.so.6 (missing)

FFmpeg was updated this week and is causing the build to break.
Are my options to either:
pin ffmpeg to a previous version? If so, how do I pin it to the version before the current one?
update the Python base image version?
Output:
Step 8/42 : RUN apk add --no-cache ffmpeg
---> Running in 9e46540ed393
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
fetch http://dl-8.alpinelinux.org/alpine/edge/community/x86_64/APKINDEX.tar.gz
ERROR: unsatisfiable constraints:
so:libvpx.so.6 (missing):
required by:
ffmpeg-libs-4.1.1-r2[so:libvpx.so.6]
ffmpeg-libs-4.1.1-r2[so:libvpx.so.6]
ffmpeg-libs-4.1.1-r2[so:libvpx.so.6]
ffmpeg-libs-4.1.1-r2[so:libvpx.so.6]
ffmpeg-libs-4.1.1-r2[so:libvpx.so.6]
ffmpeg-libs-4.1.1-r2[so:libvpx.so.6]
ffmpeg-libs-4.1.1-r2[so:libvpx.so.6]
ffmpeg-libs-4.1.1-r2[so:libvpx.so.6]
ffmpeg-libs-4.1.1-r2[so:libvpx.so.6]
ffmpeg-libs-4.1.1-r2[so:libvpx.so.6]
The command '/bin/sh -c apk add --no-cache ffmpeg' returned a non-zero code: 3
ERROR: Job failed: exit code 3
FATAL: exit code 3
Dockerfile:
FROM python:3.6.7-alpine
ENV LANG C.UTF-8
RUN echo "http://dl-8.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories
RUN apk upgrade
RUN addgroup -S django && adduser -S -G django django
RUN apk update
# FFMPEG/Sox dependencies
RUN apk add sox
#RUN apk add --no-cache libvpx-dev
RUN apk add --no-cache ffmpeg
You are only adding the community edge repository, not main. This leads to some inconsistencies for apk.
It works if you change your Dockerfile:4 to the following:
RUN echo -e "http://dl-cdn.alpinelinux.org/alpine/edge/community\nhttp://dl-cdn.alpinelinux.org/alpine/edge/main" >> /etc/apk/repositories
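As for the pinning half of the question: apk does accept version constraints, but Alpine mirrors only serve the packages currently in a branch's index, so an older ffmpeg may already be gone. A hedged sketch (the constraint is illustrative):
# Constrain below the release that introduced the breakage; this only
# works while the branch index still carries a matching version
RUN apk add --no-cache 'ffmpeg<4.1'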

How to freeze the micro version with its dependencies?

I want to build a Docker image with fixed versions of micro and its Go dependencies. I plan to do it with dep:
git clone git@github.com:micro/micro.git
dep ensure
git add Gopkg.toml
git add Gopkg.lock
# Build micro
CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-w' -i -o micro ./main.go
# Build docker image
...
So, my question is: is this the best way to build a consistent micro Docker image?
An example Dockerfile could be:
FROM golang:1.9-alpine3.6 as builder
# Install package manager
RUN apk add --no-cache --virtual .go-dependencies git curl \
    && curl https://glide.sh/get | sh
# Copy files from context
WORKDIR /go/src/github.com/foo/bar
COPY . .
# Install project dependencies, test and build
RUN glide install \
    && go test ./... \
    && CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-w' -i -o ./entry ./main.go ./plugins.go
# Build final image with binary
FROM alpine:3.6
RUN apk add --update ca-certificates && \
    rm -rf /var/cache/apk/* /tmp/*
WORKDIR /
COPY --from=builder /go/src/github.com/foo/bar/entry .
ENTRYPOINT [ "/entry" ]
And the glide.yaml would look like this:
package: .
import:
- package: github.com/micro/go-micro
  version: ^0.3.0
  subpackages:
  - client
  - server
- package: github.com/micro/go-plugins
  version: ^0.6.1
  subpackages:
  - wrapper/trace/opentracing
  - broker/nats
  - transport/nats
- package: github.com/opentracing/opentracing-go
  version: ^1
- package: github.com/openzipkin/zipkin-go-opentracing
  version: ^0.3
testImport:
- package: github.com/golang/mock
  subpackages:
  - gomock
- package: github.com/smartystreets/goconvey
  subpackages:
  - convey
In my case, dep looks great and is fast enough; moreover, it's the official dependency manager for Go, so I think it's the right choice.
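Since the question was specifically about dep, here is a hedged sketch of the builder stage using dep instead of glide, assuming Gopkg.toml and Gopkg.lock are committed and using the install script from dep's README:
FROM golang:1.9-alpine3.6 as builder
RUN apk add --no-cache git curl \
    && curl -fsSL https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
WORKDIR /go/src/github.com/micro/micro
# restore vendor/ from the lock file before copying sources,
# so the dependency layer caches independently of code changes
COPY Gopkg.toml Gopkg.lock ./
RUN dep ensure -vendor-only
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-w' -o /micro ./main.go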

GitLab CI - Cache not working

I'm currently using GitLab in combination with CI runners to run the unit tests of my project. To speed up bootstrapping the tests, I'm using the built-in cache functionality; however, this doesn't seem to work.
Each time someone commits to master, my runner does a git fetch and proceeds to remove all cached files, which means I have to stare at my screen for around 10 minutes to wait for a test to complete while the runner re-downloads all dependencies (NPM and PIP being the biggest time killers).
Output of the CI runner:
Fetching changes...
Removing bower_modules/jquery/ --+-- Shouldn't happen!
Removing bower_modules/tether/ |
Removing node_modules/ |
Removing vendor/ --'
HEAD is now at 7c513dd Update .gitlab-ci.yml
Currently my .gitlab-ci.yml:
image: python:latest
services:
- redis:latest
- node:latest
cache:
  key: "$CI_BUILD_REF_NAME"
  untracked: true
  paths:
  - ~/.cache/pip/
  - vendor/
  - node_modules/
  - bower_components/
before_script:
- python -V
# Still gets executed even though node is listed as a service??
- '(which nodejs && which npm) || (apt-get update -q && apt-get -o dir::cache::archives="vendor/apt/" install nodejs npm -yqq)'
- npm install -g bower gulp
# Following statements ignore cache!
- pip install -r requirements.txt
- npm install --only=dev
- bower install --allow-root
- gulp build
test:
  variables:
    DEBUG: "1"
  script:
  - python -m unittest myproject
I've tried reading the following articles for help, but none of them seems to fix my problem:
http://docs.gitlab.com/ce/ci/yaml/README.html#cache
https://fleschenberg.net/gitlab-pip-cache/
https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/issues/336
Turns out that I was doing some things wrong:
Your script can't cache files outside of your project scope; create a virtual environment inside the project and cache that instead, which lets you cache your pip modules.
Most important of all: your job must succeed in order for its files to be cached.
After switching to the following config, my runs were about 3 minutes faster. My configuration currently looks as follows and works for me:
# Official framework image. Look for the different tagged releases at:
# https://hub.docker.com/r/library/python
image: python:latest
# Pick zero or more services to be used on all builds.
# Only needed when using a docker container to run your tests in.
# Check out: http://docs.gitlab.com/ce/ci/docker/using_docker_images.html#what-is-service
services:
- mysql:latest
- redis:latest
cache:
  untracked: true
  key: "$CI_BUILD_REF_NAME"
  paths:
  - venv/
  - node_modules/
  - bower_components/
# This is a basic example for a gem or script which doesn't use
# services such as redis or postgres
before_script:
# Check python installation
- python -V
# Install NodeJS (Gulp & Bower)
# Default repository is outdated, this is the latest version
- 'curl -sL https://deb.nodesource.com/setup_8.x | bash -'
- apt-get install -y nodejs
- npm install -g bower gulp
# Install dependencies
- pip install -U pip setuptools
- pip install virtualenv
test:
  # Indicate to the framework that it's being unit tested
  variables:
    DEBUG: "1"
  # Test script
  script:
  # Set up virtual environment
  - virtualenv venv -ppython3
  - source venv/bin/activate
  - pip install coverage
  - pip install -r requirements.txt
  # Install NodeJS & Bower + Compile JS
  - npm install --only=dev
  - bower install --allow-root
  - gulp build
  # Run all unit tests
  - coverage run -m unittest project.tests
  - coverage report -m project/**/*.py
Which resulted in the following output:
Fetching changes...
Removing .coverage --+-- Don't worry about this
Removing bower_components/ |
Removing node_modules/ |
Removing venv/ --`
HEAD is now at 24e7618 Fix for issue #16
From https://git.example.com/repo
85f2f9b..42ba753 master -> origin/master
Checking out 42ba7537 as master...
Skipping Git submodules setup
Checking cache for master... --+-- The files are back now :)
Successfully extracted cache --`
...
project/module/script.py 157 9 94% 182, 231-244
---------------------------------------------------------------------------
TOTAL 1084 328 70%
Creating cache master...
Created cache
Uploading artifacts...
venv/: found 9859 matching files
node_modules/: found 7070 matching files
bower_components/: found 982 matching files
Trying to load /builds/repo.tmp/CI_SERVER_TLS_CA_FILE ...
Dialing: tcp git.example.com:443 ...
Uploading artifacts to coordinator... ok id=127 responseStatus=201 Created token=XXXXXX
Job succeeded
For the coverage report, I used the following regular expression:
^TOTAL\s+(?:\d+\s+){2}(\d{1,3}%)$
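The regex can be sanity-checked against the TOTAL line from the job output above; a quick sketch, assuming GNU grep with PCRE support:
# prints the line back only if the pattern matches
echo 'TOTAL                          1084    328     70%' | \
  grep -P '^TOTAL\s+(?:\d+\s+){2}(\d{1,3}%)$'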
