Opacity attribute only appears in Docker container - Laravel

I have a Laravel app that is using the Metronic theme. As a part of the theme, they have their own implementation of BlockUI. I've been using this for years with no trouble. When the app runs bare-metal, everything works as expected.
However, when I Dockerize the app, everything else works fine, but an extra opacity attribute is being applied to the BlockUI element(s). Not only that, it happens on all of the pages except one.
Here is how it should appear (bare-metal version):
As you can see, it darkens the DataTable and puts up a "Please wait..." box when an AJAX request is made.
Now here's the exact same page, but within a Docker container:
In this case, the "Please wait..." box is barely visible because it has been given an opacity of about 0.1, and you can't even tell that the DataTable has been darkened at all.
How can I track down where this is coming from? It only happens when the exact same app (no changes) is run from within a Docker container and on all pages but one. (The "Orders by Print Type" page works fine. No clue why.)
Here's the Dockerfile, in case it might have something to do with this:
FROM php:apache
# Arguments defined in docker-compose.yml
ARG user
ARG uid
# Set our application folder as an environment variable
ENV APP_HOME /var/www/html
# Set working directory
WORKDIR $APP_HOME
# Use the default production configuration
RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini"
# Copy over project-specific PHP settings
COPY ./docker-config/php/local.ini /usr/local/etc/php/conf.d/local.ini
# Get NodeJS
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
# Install all the system dependencies and enable PHP modules
RUN apt-get update && apt-get install -y \
libicu-dev \
libpq-dev \
libmcrypt-dev \
libpng-dev \
libjpeg62-turbo-dev \
libfreetype6-dev \
git \
libzip-dev \
zip \
unzip \
nodejs \
build-essential \
&& rm -r /var/lib/apt/lists/* \
&& docker-php-ext-configure pdo_mysql \
--with-pdo-mysql=mysqlnd \
&& docker-php-ext-configure gd \
--enable-gd \
--with-freetype=/usr/include/ \
--with-jpeg=/usr/include/ \
&& docker-php-ext-install \
intl \
pcntl \
pdo_mysql \
pdo_pgsql \
pgsql \
zip \
opcache \
gd \
&& pecl install -o -f redis \
&& rm -rf /tmp/pear \
&& docker-php-ext-enable redis
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin/ --filename=composer
# Change uid and gid of apache to docker user uid/gid
RUN usermod -u $uid $user && groupmod -g $uid $user
# Copy existing application directory + permissions
COPY --chown=www-data:www-data . $APP_HOME
# Change the web_root to laravel /var/www/html/public folder
RUN sed -i -e "s/html/html\/public/g" /etc/apache2/sites-enabled/000-default.conf
# Fix the .env file for production.
RUN mv "$APP_HOME/.env.production" "$APP_HOME/.env"
# Enable apache module rewrite
RUN a2enmod rewrite
# Install dependencies
RUN npm install
# Compile CSS & JS
RUN npm run production
# Install all PHP dependencies
RUN composer install --no-interaction
# Create mountpoints and link them.
RUN ln -s /mnt/orders /var/www/html/public/orders
# Run artisan commands to set things up properly
RUN php artisan key:generate
RUN php artisan storage:link
# Optimization for production
RUN composer install --optimize-autoloader --no-dev
RUN php artisan config:cache
RUN php artisan route:cache
RUN php artisan view:cache
# Set the maintainer info metadata
LABEL maintainer="Sturm <email_hidden>"
And here is the relevant portion of the docker-compose.yml file:
# Laravel app (Apache & PHP services with Laravel)
schedule:
  build:
    args:
      user: www-data
      uid: 1000
    context: .
    dockerfile: Dockerfile
  image: "sturmb/sky-schedule:2021.6.1"
  container_name: schedule
  restart: unless-stopped
  working_dir: /var/www/html
  volumes:
    - /mnt/jobs_main:/mnt/jobs_main
    - /mnt/orders:/mnt/orders
  depends_on:
    - schedule-db
  ports:
    - "8081:80"
    - "4543:443"
  networks:
    - web

There are many moving parts here, so it is not trivial to pinpoint exactly where the change to your element is being applied from. One possible way to find out is to use a MutationObserver and watch for changes being made to the DOM tree. Something along the lines of:
var mutationObserver = new MutationObserver(function (mutations) {
    mutations.forEach(function (mutation) {
        console.log("Detected change: ", mutation);
    });
});
// observe() needs a single node, not the HTMLCollection returned by getElementsByClassName;
// run this after BlockUI has created the element (or observe document.body with subtree: true)
var blockElement = document.querySelector(".blockUI.blockMsg.blockElement");
if (blockElement) {
    mutationObserver.observe(blockElement, { attributes: true, attributeFilter: ["style"] });
}
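Since the image rebuilds the front-end assets itself (the Dockerfile runs npm run production), another way to narrow this down is to diff the compiled assets between the two environments. A rough sketch, assuming the default Laravel Mix output paths (public/css and public/js) and the container name schedule from the docker-compose.yml above:
# Copy the compiled assets out of the running container
docker cp schedule:/var/www/html/public/css ./container-css
docker cp schedule:/var/www/html/public/js ./container-js
# Compare with the bare-metal build (adjust the path to your bare-metal checkout)
diff -r /path/to/bare-metal/public/css ./container-css
diff -r /path/to/bare-metal/public/js ./container-js
If an opacity rule shows up only in the container-built CSS, the difference comes from the asset build (for example, a different Node/npm version inside the image) rather than from the runtime environment.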

Related

How to run a cloned Laravel project with sail on Windows?

I cloned an existing Laravel project from git which was dockerized with Sail. As vendor is in the .gitignore, I need to rebuild it before I can use Sail to run my app. According to the Laravel docs (https://laravel.com/docs/8.x/sail#installing-composer-dependencies-for-existing-projects) I need to get my dependencies using this command:
docker run --rm \
-u "$(id -u):$(id -g)" \
-v $(pwd):/var/www/html \
-w /var/www/html \
laravelsail/php81-composer:latest \
composer install --ignore-platform-reqs
The problem is that both cmd and PowerShell seem to struggle with the $'s; it seems they expect a cmdlet name, and I can't manage to run this. What am I missing?
The error I am getting with PS is
id : The term "id" is not recognized as the name of a cmdlet, function, script file or operable program.
In cmd, I got
docker: Error response from daemon: create $(pwd): "$(pwd)" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed.
I also tried with git bash and got
docker: invalid reference format: repository name must be lowercase.
Just execute the command in a WSL2 distribution (for example, Ubuntu).
First, open the WSL console from PowerShell while in the project folder:
wsl -d ubuntu
Then execute the Docker command to run Laravel Sail:
docker run --rm \
-u "$(id -u):$(id -g)" \
-v $(pwd):/var/www/html \
-w /var/www/html \
laravelsail/php81-composer:latest \
composer install --ignore-platform-reqs
I recommend checking your PHP version for Laravel Sail compatibility.
When using the laravelsail/phpXX-composer image, you should use the same version of PHP that you plan to use for your application (74, 80, or 81).
Source: Laravel Sail
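For example, if your application targets PHP 8.0, the same command would just use the matching image tag (only the image name changes; everything else stays the same):
docker run --rm \
-u "$(id -u):$(id -g)" \
-v $(pwd):/var/www/html \
-w /var/www/html \
laravelsail/php80-composer:latest \
composer install --ignore-platform-reqs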

Dockerfile fatal error: jni.h: No such file or directory

I am trying to dockerize a Go application which uses a Go Java JNI library (https://github.com/timob/jnigi) and getting an error on the build stage as follows:
/go/src/github.com/timob/jnigi/cinit.go:8:9: fatal error: jni.h: No such file or directory
    8 | #include<jni.h>
      |         ^~~~~~~
compilation terminated.
My Dockerfile:
FROM golang:alpine as BUILD
ENV GO111MODULE=auto
RUN apk update && \
apk upgrade && \
apk add git && \
apk add unzip && \
apk add openssl-dev && \
apk add build-base && \
apk add --no-cache gcc musl-dev && \
apk add --no-cache openjdk8-jre
COPY . /go/src/project
WORKDIR /go/src/project
RUN go get -d -v
RUN CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -o /go/dist/app
FROM alpine:latest AS FINAL
COPY --from=BUILD /go/dist/app /project-runtime/app
RUN apk update && \
apk add tzdata && \
apk add apr && \
apk add ca-certificates && rm -rf /var/cache/apk/* \
apk add openssl
RUN update-ca-certificates
WORKDIR /project-runtime
ENTRYPOINT ["./app"]
The error happens when the "RUN CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -o /go/dist/app" is executed. How should I add the jni.h file? Could you please help me?
I think you are missing this script: https://github.com/timob/jnigi/blob/master/compilevars.sh. It needs to be sourced with the JDK root path, according to these instructions.
CGO_CFLAGS needs to be set so that the JNI C header files can be found, and the compilevars.sh script will do this for you.
# put this in your build script
source <gopath>/src/tekao.net/jnigi/compilevars.sh <root path of jdk>
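In the Alpine build stage above, that roughly translates to installing a full JDK (the openjdk8-jre package does not include the JNI headers) and pointing cgo at them before go build runs. A sketch only; the JDK path is an assumption about Alpine's openjdk8 package, so verify it inside your image:
# Install the full JDK so that jni.h and jni_md.h are present
RUN apk add --no-cache openjdk8
# Assumed install location of Alpine's openjdk8 package; check with `ls /usr/lib/jvm`
ENV JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk
# This is what compilevars.sh does for you: tell cgo where the JNI headers live
ENV CGO_CFLAGS="-I${JAVA_HOME}/include -I${JAVA_HOME}/include/linux"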

API Platform demo project installation problem

I want to try the api-platform demo (link) but when running docker-compose up -d I get this:
Step 25/34 : RUN set -eux; mkdir -p var/cache var/log; composer dump-autoload --classmap-authoritative --no-dev; composer dump-env prod; composer run-script --no-dev post-install-cmd; chmod +x bin/console; sync
---> Running in ce481c894af3
+ mkdir -p var/cache var/log
+ composer dump-autoload --classmap-authoritative --no-dev
Generating optimized autoload files (authoritative)
composer/package-versions-deprecated: Generating version class...
composer/package-versions-deprecated: ...done generating version class
Generated optimized autoload files (authoritative) containing 4186 classes
+ composer dump-env prod
[RuntimeException]
Please run "composer require symfony/dotenv" to load the ".env" files configuring the application.
symfony:dump-env [--empty] [--] [<env>]
The command '/bin/sh -c set -eux; mkdir -p var/cache var/log; composer dump-autoload --classmap-authoritative --no-dev; composer dump-env prod; composer run-script --no-dev post-install-cmd; chmod +x bin/console; sync' returned a non-zero code: 1
ERROR: Service 'php' failed to build
OS: Ubuntu 18.04
Docker: version 20.10.3, build 48d30b5
Docker-compose: version 1.28.4, build cabd5cfb
What am I doing wrong?
There is a pull request to fix it; just update the composer.lock file: https://github.com/api-platform/api-platform/pull/1826
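A rough sketch of applying that, assuming you are working from a clone of the demo repository and the fix has been merged upstream (the service name php comes from the error output above):
# Pull the updated composer.lock, then rebuild the failing service from scratch
git pull
docker-compose build --no-cache php
docker-compose up -d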

Can we concatenate several RUN instructions with --no-cache in Dockerfile

I was writing a Dockerfile and I have concatenated several RUN instructions into one for proper caching, but I realised one of the RUN instructions has --no-cache. Could you please advise how the caching will work here?
RUN go mod download \
&& apk update --no-cache \
&& apk add git \
&& CGO_ENABLED=0 go build -o golang-sdk .
The apk update --no-cache does not make sense: apk's --no-cache flag just tells apk not to keep its local package index (which makes a separate apk update unnecessary), and it has nothing to do with Docker's build cache. Strike it and move the flag onto the git install:
RUN apk add git --no-cache \
&& go mod download \
&& CGO_ENABLED=0 go build -o golang-sdk .
Even better: do a two-stage build:
FROM golang:latest AS build
WORKDIR /go/src/github.com/you/project/
RUN [yourstuff]
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /usr/local/bin
COPY --from=build /go/src/github.com/you/project/app .
CMD ["/usr/local/bin/app"]
This way, you can do all the stuff you like while building without needing to think about image sizes, and have the smallest possible image for app.
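For completeness: Docker's own layer cache is never controlled from inside a RUN instruction; apk's --no-cache flag only affects apk's package index. If you actually want Docker to rebuild every layer, that is a build-time flag (the tag here is just an example):
# Rebuild all layers, ignoring Docker's layer cache
docker build --no-cache -t golang-sdk .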

reduce docker build time for react static app

I am trying to reduce the time it takes to build a Docker image for a React app;
the React app should be static, with no server-side rendering.
Right now it takes around 5-10 minutes to create an image, the image size on the local machine is around 1.5 GB(!), and even on a second build, after I've changed something in the code, it doesn't use any cache.
I am looking for a solution to cut both the time and the size. Here is my Dockerfile after a lot of changes:
# Producation and dev build
FROM node:14.2.0-alpine3.10 AS test1
RUN apk update
RUN apk add \
build-base \
libtool \
autoconf \
automake \
jq \
openssh \
libexecinfo-dev
ADD package.json package-lock.json /app/
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
ADD . /app/
RUN rm -rf node_modules
RUN npm install --production
# copy production node_modules aside, to prevent collecting them
RUN cp -R node_modules prod_node_modules
# install ALL node_modules, including 'devDependencies'
RUN npm install
RUN npm install react-scripts#3.4.1 -g --silent
RUN npm run build
RUN rm -rf node_modules
RUN cp -R prod_node_modules node_modules
#FROM node:13.14.0
FROM test1
# copy app sources
COPY --from=test1 /app/build .
COPY --from=test1 /app/env-config.js .
# serve is what we use to run the web application
RUN npm install -g serve
# remove the sources & other needless stuff
RUN rm -rf ./src
RUN rm -rf ./prod_node_modules
# Add bash
RUN apk add --no-cache bash
CMD ["/bin/bash", "-c", "serve -s build"]
You're hitting two basic dynamics here. The first is that your image contains a pretty large amount of build-time content, including at least some parts of a C toolchain; since your run-time "stage" is built FROM the build stage as-is, it brings all of the build toolchain along with it. The second is that each RUN command produces a new Docker layer with differences from the previous layer, so RUN commands can only make the image larger. More specifically, RUN rm -rf ... makes the image slightly larger and does not result in space savings.
You can use a multi-stage build to improve this. Each FROM line causes docker build to start over from some specified base image, and you can COPY --from=... out of previous build stages. I'd do this in two stages: a first stage that builds the application and a second stage that runs it.
# Build stage:
FROM node:14.2.0-alpine3.10 AS build
# Install OS-level dependencies (including C toolchain)
RUN apk update \
&& apk add \
build-base \
libtool \
autoconf \
automake \
jq \
openssh \
libexecinfo-dev
# set working directory
WORKDIR /app
# install app dependencies
# (copy _just_ the package.json here so Docker layer caching works)
COPY package.json package-lock.json ./
RUN npm install
# build the application
COPY . ./
RUN npm run build
# Final stage:
FROM node:14.2.0-alpine3.10
# set working directory
WORKDIR /app
# install dependencies
COPY package.json package-lock.json ./
RUN npm install --production
# get the build tree
COPY --from=build /app/build/ ./build/
# explain how to run the application
ENTRYPOINT ["npx"]
CMD ["serve", "-g", "build"]
Note that when we get to the second stage, we run npm install --production on a clean Node installation; we don't try to shuffle back and forth between dev and prod dependencies. Rather than trying to RUN rm -rf src, we just don't COPY it into the final image.
This also requires making sure you have a .dockerignore file that contains node_modules (which will reduce build times and avoid some potential conflicts; RUN npm install will recreate it in the directory). If you need react-scripts or serve those should be listed in your package.json file.
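A minimal .dockerignore sketch for this layout (node_modules is the important entry; the other lines are reasonable guesses for a typical Create React App project and can be adjusted):
node_modules
build
.git
*.log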
