Docker: file (bootstrap.sh) not found when running the container, although the file is present in the image

I have created an image to run a Docker container with Chrome; my code is below. The Dockerfile builds into an image without problems, but whenever I try to run a container from the image I get the error "bootstrap.sh file not found", even though the file is present in the filesystem snapshot inside the image. You can check the screenshots below.
Please help me resolve this issue; I am new to Docker.
FROM ubuntu:16.04
RUN apt-get update && apt-get clean && apt-get install -y \
x11vnc \
xvfb \
fluxbox \
wmctrl \
wget \
&& wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list \
&& apt-get update && apt-get -y install google-chrome-stable
RUN useradd apps \
&& mkdir -p /home/apps \
&& chown -v -R apps:apps /home/apps
COPY bootstrap.sh /
CMD '/bootstrap.sh'
bootstrap.sh file code:
#!/bin/bash
# Based on: http://www.richud.com/wiki/Ubuntu_Fluxbox_GUI_with_x11vnc_and_Xvfb
main() {
log_i "Starting xvfb virtual display..."
launch_xvfb
log_i "Starting window manager..."
launch_window_manager
log_i "Starting VNC server..."
run_vnc_server
}
launch_xvfb() {
local xvfbLockFilePath="/tmp/.X1-lock"
if [ -f "${xvfbLockFilePath}" ]
then
log_i "Removing xvfb lock file '${xvfbLockFilePath}'..."
if ! rm -v "${xvfbLockFilePath}"
then
log_e "Failed to remove xvfb lock file"
exit 1
fi
fi
# Set defaults if the user did not specify envs.
export DISPLAY=${XVFB_DISPLAY:-:1}
local screen=${XVFB_SCREEN:-0}
local resolution=${XVFB_RESOLUTION:-1280x960x24}
local timeout=${XVFB_TIMEOUT:-5}
# Start and wait for either Xvfb to be fully up or we hit the timeout.
Xvfb ${DISPLAY} -screen ${screen} ${resolution} &
local loopCount=0
until xdpyinfo -display ${DISPLAY} > /dev/null 2>&1
do
loopCount=$((loopCount+1))
sleep 1
if [ ${loopCount} -gt ${timeout} ]
then
log_e "xvfb failed to start"
exit 1
fi
done
}
launch_window_manager() {
local timeout=${XVFB_TIMEOUT:-5}
# Start and wait for either fluxbox to be fully up or we hit the timeout.
fluxbox &
local loopCount=0
until wmctrl -m > /dev/null 2>&1
do
loopCount=$((loopCount+1))
sleep 1
if [ ${loopCount} -gt ${timeout} ]
then
log_e "fluxbox failed to start"
exit 1
fi
done
}
run_vnc_server() {
local passwordArgument='-nopw'
if [ -n "${VNC_SERVER_PASSWORD}" ]
then
local passwordFilePath="${HOME}/.x11vnc.pass"
if ! x11vnc -storepasswd "${VNC_SERVER_PASSWORD}" "${passwordFilePath}"
then
log_e "Failed to store x11vnc password"
exit 1
fi
passwordArgument="-rfbauth ${passwordFilePath}"
log_i "The VNC server will ask for a password"
else
log_w "The VNC server will NOT ask for a password"
fi
x11vnc -display ${DISPLAY} -forever ${passwordArgument} &
wait $!
}
log_i() {
log "[INFO] ${@}"
}
log_w() {
log "[WARN] ${@}"
}
log_e() {
log "[ERROR] ${@}"
}
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] ${@}"
}
control_c() {
echo ""
exit
}
trap control_c SIGINT SIGTERM SIGHUP
main
exit
Snapshot of the error:
[Screenshot: error shown when running the container]
Proof that bootstrap.sh is present inside my Docker image:
[Screenshot: filesystem snapshot showing /bootstrap.sh in the image]

You need to give the file execute permissions. In your Dockerfile, add the line RUN chmod +x /bootstrap.sh right before the CMD instruction.
Dockerfile
FROM ubuntu:16.04
COPY bootstrap.sh /
RUN apt-get update && apt-get clean && apt-get install -y \
x11vnc \
xvfb \
fluxbox \
wmctrl \
wget \
&& wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list \
&& apt-get update && apt-get -y install google-chrome-stable
RUN useradd apps \
&& mkdir -p /home/apps \
&& chown -v -R apps:apps /home/apps
RUN chmod +x /bootstrap.sh
CMD '/bootstrap.sh'
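To verify the fix without starting the full stack, you can check the file's permissions inside the built image (the chrome-vnc tag below is just a placeholder):
docker build -t chrome-vnc .
docker run --rm chrome-vnc ls -l /bootstrap.sh
If you prefer the exec form of CMD, CMD ["/bootstrap.sh"] also works once the script is executable and has a valid shebang line.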

You can try this:
CMD sh /bootstrap.sh
DockerFile
FROM ubuntu:16.04
COPY bootstrap.sh /
RUN apt-get update && apt-get clean && apt-get install -y \
x11vnc \
xvfb \
fluxbox \
wmctrl \
wget \
&& wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list \
&& apt-get update && apt-get -y install google-chrome-stable
RUN useradd apps \
&& mkdir -p /home/apps \
&& chown -v -R apps:apps /home/apps
CMD sh /bootstrap.sh
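This works even without the execute bit, because the script is passed to sh as an argument instead of being executed directly. Since the script declares #!/bin/bash and sh on Ubuntu is dash, invoking it with bash is the safer variant (a small sketch, not tested against this image):
CMD bash /bootstrap.sh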

Related

Laravel Sail can't create new log files: permission denied

I'm using Laravel Sail with this Dockerfile content:
FROM ubuntu:22.04
LABEL maintainer="Taylor Otwell"
ARG WWWGROUP
ARG NODE_VERSION=16
ARG POSTGRES_VERSION=14
WORKDIR /var/www/html
ENV DEBIAN_FRONTEND noninteractive
ENV TZ=UTC
RUN apt-get update \
&& apt-get install -y cron
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update \
&& apt-get install -y gnupg gosu curl ca-certificates zip unzip git supervisor sqlite3 libcap2-bin libpng-dev python2 \
&& mkdir -p ~/.gnupg \
&& chmod 600 ~/.gnupg \
&& echo "disable-ipv6" >> ~/.gnupg/dirmngr.conf \
&& echo "keyserver hkp://keyserver.ubuntu.com:80" >> ~/.gnupg/dirmngr.conf \
&& gpg --recv-key 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c \
&& gpg --export 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c > /usr/share/keyrings/ppa_ondrej_php.gpg \
&& echo "deb [signed-by=/usr/share/keyrings/ppa_ondrej_php.gpg] https://ppa.launchpadcontent.net/ondrej/php/ubuntu jammy main" > /etc/apt/sources.list.d/ppa_ondrej_php.list \
&& apt-get update \
&& apt-get install -y php8.1-cli php8.1-dev \
php8.1-pgsql php8.1-sqlite3 php8.1-gd \
php8.1-curl \
php8.1-imap php8.1-mysql php8.1-mbstring \
php8.1-xml php8.1-zip php8.1-bcmath php8.1-soap \
php8.1-intl php8.1-readline \
php8.1-ldap \
php8.1-msgpack php8.1-igbinary php8.1-redis php8.1-swoole \
php8.1-memcached php8.1-pcov php8.1-xdebug \
&& php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer \
&& curl -sLS https://deb.nodesource.com/setup_$NODE_VERSION.x | bash - \
&& apt-get install -y nodejs \
&& npm install -g npm \
&& curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | gpg --dearmor | tee /usr/share/keyrings/yarn.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/yarn.gpg] https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list \
&& curl -sS https://www.postgresql.org/media/keys/ACCC4CF8.asc | gpg --dearmor | tee /usr/share/keyrings/pgdg.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/pgdg.gpg] http://apt.postgresql.org/pub/repos/apt jammy-pgdg main" > /etc/apt/sources.list.d/pgdg.list \
&& apt-get update \
&& apt-get install -y yarn \
&& apt-get install -y mysql-client \
&& apt-get install -y postgresql-client-$POSTGRES_VERSION \
&& apt-get -y autoremove \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN setcap "cap_net_bind_service=+ep" /usr/bin/php8.1
RUN groupadd --force -g 1000 sailRoot
RUN useradd -ms /bin/bash --no-user-group -g 1000 -u 1337 sailRoot
RUN usermod -aG sudo sail
RUN usermod -aG sudo sailRoot
COPY scheduler /etc/cron.d/scheduler
RUN chmod 0644 /etc/cron.d/scheduler \
&& crontab /etc/cron.d/scheduler
COPY start-container /usr/local/bin/start-container
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY php.ini /etc/php/8.1/cli/conf.d/99-sail.ini
RUN chmod +x /usr/local/bin/start-container
RUN chown -R www-data:www-data /var/www
EXPOSE 8000
ENTRYPOINT ["start-container"]
Here's the start-container entrypoint:
#!/usr/bin/env bash
if [ ! -z "1000" ]; then
usermod -u 1000 sail
fi
if [ ! -d /.composer ]; then
mkdir /.composer
fi
chmod -R ugo+rw /.composer
if [ $# -gt 0 ]; then
exec gosu 1000 "$@"
else
exec /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
fi
set -xe
cd /var/www/html
chmod -R 755 app
chmod -R 755 public
chmod -R 755 config
chmod -R 755 storage
chmod -R 777 storage/logs
chmod -R 755 database
chmod -R 755 bootstrap
chmod -R 777 resources
chmod -R 755 routes
chmod -R 755 composer.json
chmod -R 755 composer.lock
But when Laravel tries to create a new log file I get:
The stream or file "/var/www/html/storage/logs/laravel-2022-11-05.log" could not be opened in append mode: Failed to open stream: Permission denied
How can I solve this permission problem? For now I fix it manually over SSH with chmod -R 777 storage/logs, but I need a better solution via Docker.
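One thing worth noting in the start-container script above: nothing after the exec lines ever runs, because exec replaces the shell process, so the chmod calls at the bottom have no effect. A minimal sketch of an entrypoint that fixes the writable directories before handing off (paths taken from the question; uid 1000 matches the gosu call above; untested):
#!/usr/bin/env bash
set -e

# make the Laravel writable directories owned by the runtime user here,
# since anything placed after `exec` never executes
chown -R 1000:1000 /var/www/html/storage
chmod -R ug+rw /var/www/html/storage

if [ $# -gt 0 ]; then
    exec gosu 1000 "$@"
else
    exec /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
fi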

How to use Jmeter with Docker in a Virtual Machine?

I am working with JMeter, which I have to install in a virtual machine. To install applications, we usually use Docker. So I'd like to know whether it's possible to install the JMeter application in my virtual machine that way.
I have tried to run a lot of Docker images but I can't make sense of them... I pulled this image with Docker: https://hub.docker.com/r/justb4/jmeter/ but when I tried to run it, I couldn't get GUI mode...
The thing is that I have a test plan test.jmx that I worked on on my PC, and I would like either to recreate it in my VM (with the JMeter GUI) or to export it to my VM so that I can launch it there.
I don't know if anyone can help me!
Thank you in advance. Have a nice day.
To install applications, we usually use Docker.
This is a very weird use case for Docker.
If you want to use Docker to run JMeter in GUI mode for test development, I doubt you will be able to find a suitable image on Docker Hub; most probably you will need to build one yourself. An example Dockerfile would be something like:
FROM alpine:edge
ENV DISPLAY :99
ENV RESOLUTION 1366x768x24
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories \
&& apk add --no-cache curl xfce4-terminal xvfb x11vnc xfce4 openjdk8-jre bash xrdp \
&& curl -L https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.2.1.tgz > /tmp/jmeter.tgz \
&& tar -xvf /tmp/jmeter.tgz -C /opt \
&& rm /tmp/jmeter.tgz \
&& curl -L https://jmeter-plugins.org/get/ > /opt/apache-jmeter-5.2.1/lib/ext/jmeter-plugins-manager.jar \
&& echo "[Globals]" > /etc/xrdp/xrdp.ini \
&& echo "bitmap_cache=true" >> /etc/xrdp/xrdp.ini \
&& echo "bitmap_compression=true" >> /etc/xrdp/xrdp.ini \
&& echo "autorun=jmeter" >> /etc/xrdp/xrdp.ini \
&& echo "[jmeter]" >> /etc/xrdp/xrdp.ini \
&& echo "name=jmeter" >> /etc/xrdp/xrdp.ini \
&& echo "lib=libvnc.so" >> /etc/xrdp/xrdp.ini \
&& echo "ip=localhost" >> /etc/xrdp/xrdp.ini \
&& echo "port=5900" >> /etc/xrdp/xrdp.ini \
&& echo "username=jmeter" >> /etc/xrdp/xrdp.ini \
&& echo "password=" >> /etc/xrdp/xrdp.ini
EXPOSE 5900
EXPOSE 3389
CMD ["bash", "-c", "rm -f /tmp/.X99-lock && rm -f /var/run/xrdp.pid\
&& nohup bash -c \"/usr/bin/Xvfb :99 -screen 0 ${RESOLUTION} -ac +extension GLX +render -noreset && export DISPLAY=99 > /dev/null 2>&1 &\"\
&& nohup bash -c \"startxfce4 > /dev/null 2>&1 &\"\
&& nohup bash -c \"x11vnc -xkb -noxrecord -noxfixes -noxdamage -display :99 -forever -bg -nopw -rfbport 5900 > /dev/null 2>&1\"\
&& nohup bash -c \"xrdp -p 3389 > /dev/null 2>&1\"\
&& nohup bash -c \"/opt/apache-jmeter-5.2.1/bin/./jmeter -Jjmeter.laf=CrossPlatform > /dev/null 2>&1 &\"\
&& tail -f /dev/null"]
Save the above Dockerfile somewhere on your local drive
Build the image like:
docker build -t jmeter:gui .
Run the image like:
docker run -d -p 3389:3389 -p 5900:5900 jmeter:gui
You should be able to connect to the GUI of the running image using an RDP or VNC client.
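For example, from the machine running the container (the client commands below are illustrative; any RDP or VNC client will do):
vncviewer localhost::5900      # TigerVNC: '::' selects the TCP port directly
xfreerdp /v:localhost:3389     # FreeRDP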
For running your JMeter tests in Docker, refer to the Make Use of Docker with JMeter - Learn How article.

DB2+IBM MQ; enable_MQFunctions = Error -- while connecting to database

I built a Docker image containing IBM MQ 9.1 and DB2 Express-C 9.7 on Ubuntu 16.04 64-bit.
I want to enable the MQ functions (sending messages to a queue) on my Db2 database.
But when I run enable_MQFunctions I get this error:
*** Error -- while connecting to TEST
Make sure that user(db2inst1) and password(pass) are valid and that the DB2 instance has started.
*** enable_MQFunction finished with error
The database, user, and password are all correct, and I don't understand it, because before running this command I could connect to my database without problems.
The Dockerfile I used today (with only DB2 and IBM MQ, without IIB):
# © Copyright IBM Corporation 2015, 2017
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#==============================
#========================
#FROM centos:7
FROM ubuntu:16.04
#FROM ubuntu:17.10
#LABEL maintainer "Arthur Barr <arthur.barr@uk.ibm.com>, Rob Parker <PARROBE@uk.ibm.com>"
#LABEL "ProductID"="98102d16795c4263ad9ca075190a2d4d" \
# "ProductName"="IBM MQ Advanced for Developers" \
# "ProductVersion"="9.0.4"
# The URL to download the MQ installer from in tar.gz format
#oryginal ARG MQ_URL=https://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/messaging/mqadv/mqadv_dev904_ubuntu_x86-64.tar.gz
ARG MQ_URL=http://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/messaging/mqadv/mqadv_dev910_ubuntu_x86-64.tar.gz
#ARG MQ_URL=http://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/messaging/mqadv/mqadv_dev80_linux_x86-64.tar.gz
#ARG MQ_URL=\\172.29.5.249\mqadv_dev910_ubuntu_x86-64.tar.gz
# The MQ packages to install
ARG MQ_PACKAGES="ibmmq-server ibmmq-java ibmmq-jre ibmmq-gskit ibmmq-web ibmmq-msg-.*"
#RUN rm /var/lib/apt/lists/*
RUN apt-get clean -y
RUN apt-get autoclean -y
RUN export DEBIAN_FRONTEND=noninteractive \
# Install additional packages required by MQ, this install process and the runtime scripts
&& apt-get update -y \
&& apt-get install -y --no-install-recommends \
# && yum update -y \
# && yum install -y \
bash \
bc \
ca-certificates \
coreutils \
curl \
debianutils \
file \
findutils \
gawk \
grep \
libc-bin \
lsb-release \
mount \
passwd \
procps \
sed \
tar \
util-linux \
# Download and extract the MQ installation files
&& export DIR_EXTRACT=/tmp/mq \
&& mkdir -p ${DIR_EXTRACT} \
&& cd ${DIR_EXTRACT} \
&& curl -LO $MQ_URL \
&& tar -zxvf ./*.tar.gz \
# Recommended: Remove packages only needed by this script
#
#&& package-cleanup --leaves --all \ <------- my addition
# Recommended: Create the mqm user ID with a fixed UID and group, so that the file permissions work between different images
&& groupadd --system --gid 990 mqm \
&& useradd --system --uid 990 --gid mqm mqm \
&& usermod -G mqm root \
# Find directory containing .deb files
&& export DIR_DEB=$(find ${DIR_EXTRACT} -name "*.deb" -printf "%h\n" | sort -u | head -1) \
# Find location of mqlicense.sh
&& export MQLICENSE=$(find ${DIR_EXTRACT} -name "mqlicense.sh") \
# Accept the MQ license
&& ${MQLICENSE} -text_only -accept \
&& echo "deb [trusted=yes] file:${DIR_DEB} ./" > /etc/apt/sources.list.d/IBM_MQ.list \
# Install MQ using the DEB packages
&& apt-get update \
&& apt-get install -y $MQ_PACKAGES \
# Remove 32-bit libraries from 64-bit container
&& find /opt/mqm /var/mqm -type f -exec file {} \; \
| awk -F: '/ELF 32-bit/{print $1}' | xargs --no-run-if-empty rm -f \
# Remove tar.gz files unpacked by RPM postinst scripts
&& find /opt/mqm -name '*.tar.gz' -delete \
# Recommended: Set the default MQ installation (makes the MQ commands available on the PATH)
&& /opt/mqm/bin/setmqinst -p /opt/mqm -i \
# Clean up all the downloaded files
&& rm -f /etc/apt/sources.list.d/IBM_MQ.list \
&& rm -rf ${DIR_EXTRACT} \
# Apply any bug fixes not included in base Ubuntu or MQ image.
# Don't upgrade everything based on Docker best practices https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#run
&& apt-get upgrade -y sensible-utils \
# End of bug fixes
&& rm -rf /var/lib/apt/lists/* \
# Optional: Update the command prompt with the MQ version
&& echo "mq:$(dspmqver -b -f 2)" > /etc/debian_chroot \
&& rm -rf /var/mqm \
# Optional: Set these values for the Bluemix Vulnerability Report
&& sed -i 's/PASS_MAX_DAYS\t99999/PASS_MAX_DAYS\t90/' /etc/login.defs \
&& sed -i 's/PASS_MIN_DAYS\t0/PASS_MIN_DAYS\t1/' /etc/login.defs \
&& sed -i 's/password\t\[success=1 default=ignore\]\tpam_unix\.so obscure sha512/password\t[success=1 default=ignore]\tpam_unix.so obscure sha512 minlen=8/' /etc/pam.d/common-password
#==========db2 expres START====
#FROM centos:7
#MAINTAINER Leo Wu <leow@ca.ibm.com>
###############################################################
#
# System preparation for DB2
#
###############################################################
#******************** from the iib-mq-db2 git repo
RUN dpkg --add-architecture i386
RUN export DEBIAN_FRONTEND=noninteractive \
&& apt-get update && \
apt-get install -y --no-install-recommends \
curl \
bash \
bc \
coreutils \
curl \
debianutils \
findutils \
gawk \
grep \
libc-bin \
lsb-release \
libncurses-dev \
libstdc++6 \
gcc \
binutils \
make \
libpam0g:i386 \
lib32stdc++6 \
lib32gcc1 \
libcurl4-gnutls-dev:i386 \
numactl \
libaio1 \
libxml2 \
mount \
passwd \
procps \
rpm \
sed \
tar \
wget \
util-linux
RUN rm -rf /var/lib/apt/lists/*
RUN apt-get dist-upgrade -y
#******************
RUN groupadd db2iadm1 && useradd -G db2iadm1 db2inst1
# Required packages
#RUN yum install -y \
# vi \
# sudo \
# passwd \
# pam \
# pam.i686 \
# ncurses-libs.i686 \
# file \
# libaio \
# libstdc++-devel.i686 \
# numactl-libs \
# which \
# && yum clean all
ENV DB2EXPRESSC_DATADIR /home/db2inst1/data
# IMPORTANT Note:
# Due to compliance for IBM product, you have to host a downloaded DB2 Express-C Zip file yourself
# Here are suggested steps:
# 1) Please download zip file of db2 express-c from http://www-01.ibm.com/software/data/db2/express-c/download.html
# 2) Then upload it to a cloud storage like AWS S3 or IBM SoftLayer Object Storage
# 3) Acquire a URL and SHA-256 hash of file and pass it via Docker's build time argument facility
ARG DB2EXPRESSC_URL=ftp://ftp.software.ibm.com/software/data/db2/express/db2exc_images/db2exc_970_LNX_x86_64.tar.gz
#ARG DB2EXPRESSC_URL=http://lorenzana.gt/uploads/files/v10.5fp1_linuxx64_expc.tar.gz
#ARG DB2EXPRESSC_URL=\\172.29.5.249\public\image\v10.5fp1_linuxx64_expc.tar.gz
ADD db2expc.rsp /tmp/db2expc.rsp
ADD db2rfe.cfg /home/db2inst1/sqllib/instance/db2rfe.cfg
COPY db2expc.rsp /tmp
RUN curl -fkSLo /tmp/expc.tar.gz $DB2EXPRESSC_URL
RUN cd /tmp && tar xf expc.tar.gz
RUN rm -rf /home/db2inst1/sqllib
RUN mkdir /home/db2inst1/sqllib
RUN su - root -c "chmod -R 1777 /home/db2inst1/"
RUN su - db2inst1 -c "/tmp/expc/db2_install -f sysreq -b /home/db2inst1/sqllib"
# RUN su - db2inst1 -c "/tmp/expc/db2setup -r /tmp/db2expc.rsp" || echo "db2setup failed"
RUN echo '. /home/db2inst1/sqllib/db2profile' >> /home/db2inst1/.bash_profile \
&& rm -rf /tmp/db2* && rm -rf /tmp/expc* \
&& sed -ri 's/(ENABLE_OS_AUTHENTICATION=).*/\1YES/g' /home/db2inst1/sqllib/instance/db2rfe.cfg \
&& sed -ri 's/(RESERVE_REMOTE_CONNECTION=).*/\1YES/g' /home/db2inst1/sqllib/instance/db2rfe.cfg \
&& sed -ri 's/^\*(SVCENAME=db2c_db2inst1)/\1/g' /home/db2inst1/sqllib/instance/db2rfe.cfg \
&& sed -ri 's/^\*(SVCEPORT)=48000/\1=50000/g' /home/db2inst1/sqllib/instance/db2rfe.cfg \
&& mkdir $DB2EXPRESSC_DATADIR && chown db2inst1.db2iadm1 $DB2EXPRESSC_DATADIR
RUN su - db2inst1 -c "db2start && db2set DB2COMM=TCPIP && db2 UPDATE DBM CFG USING DFTDBPATH $DB2EXPRESSC_DATADIR IMMEDIATE && db2 create database db2inst1" \
&& su - db2inst1 -c "db2stop force" \
&& cd /home/db2inst1/sqllib/instance \
&& ./db2rfe -f ./db2rfe.cfg
#COPY docker-entrypoint.sh /entrypoint.sh
#ENTRYPOINT ["/entrypoint.sh"]
#VOLUME $DB2EXPRESSC_DATADIR
#EXPOSE 50000
#=========db2 express END ====
COPY *.sh /usr/local/bin/
COPY *.mqsc /etc/mqm/
COPY admin.json /etc/mqm/
COPY mq-dev-config /etc/mqm/mq-dev-config
RUN chmod +x /usr/local/bin/*.sh
# Always use port 1414 (the Docker administrator can re-map ports at runtime)
# Expose port 9443 for the web console
#VOLUME /home/db2inst1/data
EXPOSE 1414 9443 50000
ENV LANG=en_US.UTF-8
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
#ENTRYPOINT ["mq.sh"]
entrypoint.sh (with MQ and DB2 commands):
#======= start MQ =====
set -e
mq-license-check.sh
echo "----------------------------------------"
source mq-parameter-check.sh
echo "----------------------------------------"
setup-var-mqm.sh
echo "----------------------------------------"
which strmqweb && source setup-mqm-web.sh
echo "----------------------------------------"
mq-pre-create-setup.sh
echo "----------------------------------------"
source mq-create-qmgr.sh
echo "----------------------------------------"
source mq-start-qmgr.sh
echo "----------------------------------------"
source mq-dev-config.sh
echo "----------------------------------------"
source mq-configure-qmgr.sh
echo "----------------------------------------"
exec mq-monitor-qmgr.sh ${MQ_QMGR_NAME}
#======== from MQ - END ======
pid=0
function log_info {
echo -e $(date '+%Y-%m-%d %T')"\e[1;32m $@\e[0m"
}
function log_error {
echo -e >&2 $(date +"%Y-%m-%d %T")"\e[1;31m $@\e[0m"
}
function stop_db2 {
log_info "stopping database engine"
su - db2inst1 -c "db2stop force"
}
function start_db2 {
log_info "starting database engine"
su - db2inst1 -c "db2start"
}
function restart_db2 {
# if you just need to restart db2 and not to kill this container
# use docker kill -s USR1 <container name>
kill ${spid}
log_info "Asked for instance restart doing it..."
stop_db2
start_db2
log_info "database instance restarted on request"
}
function terminate_db2 {
kill ${spid}
stop_db2
if [ $pid -ne 0 ]; then
kill -SIGTERM "$pid"
wait "$pid"
fi
log_info "database engine stopped"
exit 0 # finally exit main handler script
}
trap "terminate_db2" SIGTERM
trap "restart_db2" SIGUSR1
if [ ! -f ~/db2inst1_pw_set ]; then
if [ -z "$DB2INST1_PASSWORD" ]; then
log_error "error: DB2INST1_PASSWORD not set"
log_error "Did you forget to add -e DB2INST1_PASSWORD=... ?"
exit 1
else
log_info "Setting db2inst1 user password..."
(echo "$DB2INST1_PASSWORD"; echo "$DB2INST1_PASSWORD") | passwd db2inst1 > /dev/null 2>&1
if [ $? != 0 ];then
log_error "Changing password for db2inst1 failed"
exit 1
fi
touch ~/db2inst1_pw_set
fi
fi
if [ ! -f ~/db2_license_accepted ];then
if [ -z "$LICENSE" ];then
log_error "error: LICENSE not set"
log_error "Did you forget to add '-e LICENSE=accept' ?"
exit 1
fi
if [ "${LICENSE}" != "accept" ];then
log_error "error: LICENSE not set to 'accept'"
log_error "Please set '-e LICENSE=accept' to accept License before use the DB2 software contained in this image."
exit 1
fi
touch ~/db2_license_accepted
fi
if [[ $1 = "-d" ]]; then
log_info "Initializing container"
start_db2
log_info "Database db2diag log following"
tail -f ~db2inst1/sqllib/db2dump/db2diag.log &
export pid=${!}
while true
do
sleep 10000 &
export spid=${!}
wait $spid
done
else
exec "$1"
fi
and then:
docker run -e LICENSE=accept -e MQ_QMGR_NAME=MQ321 -e DB2INST1_PASSWORD=pass -p 41419:1414 -p 9459:9443 -p 5015:50000 allall4r
And after all that, I used the commands from: HERE
So I executed:
root:
usermod -G mqm db2inst1
/opt/mqm/bin/setmqinst -i -n Installation1 -p /opt/mqm
mqm user:
PATH=$PATH:/opt/mqm/bin
db2inst1 user:
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/mqm/lib64
AMT_DATA_PATH=/opt/mqm
db2start
db2 create db testdb
db2 connect to testdb
cd ~/sqllib/cfg/mq
db2 -tvf amtsetup.sql
An upload with all the files needed to build this image is here: UPLOAD LINK
The image will be about 3.1 GB.
I suspect that the cause of your symptom is that the account specified on the enable_MQFunctions command line does not have a password at the time that enable_MQFunctions tries to run. You can prove this by looking at db2diag.log to see the exact authentication failure message, and/or by looking at that account's password entry (on Ubuntu it is in /etc/shadow) just before you run enable_MQFunctions.
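A quick way to check both (a sketch; the db2diag.log path matches the one tailed by the entrypoint above):
grep -i -A 3 authentication /home/db2inst1/sqllib/db2dump/db2diag.log | tail -n 20
grep '^db2inst1:' /etc/shadow    # '!' or '*' in the second field means no usable password is set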
You can expand the Dockerfile to configure the Db2 for MQ entirely during the docker build instead of running those steps after docker run or in entrypoints. That way you are responsible for all the steps inside the Dockerfile and it will be repeatable without manual intervention after the docker run command. It also means that your built image is pre-baked with all of the required configuration which will then be persistent. You need to have enough competence with scripting in the Dockerfile to get the desired outcome.
When correctly done, the enable_MQFunctions will operate properly during docker build, so if you are getting errors it's because you are doing it incorrectly.
I can successfully configure the database and run enable_MQFunctions all inside the Dockerfile, with these steps below (because of using a non-root install of Db2), so all the configuration is already in the built image.
after installing Db2 and before db2start the Dockerfile should
create /home/db2inst1/sqllib/userprofile (which runs whenever the instance-owner account sources ("dots in") its db2profile from .bash_profile or .profile), to do these steps:
-- append /opt/mqm/lib64 to LD_LIBRARY_PATH
-- export AMT_DATA_PATH=/opt/mqm
-- prepend /opt/mqm/bin on the PATH
chown db2inst1:db2iadm1 /home/db2inst1/sqllib/userprofile
after installing Db2 and before db2start, the Dockerfile should run these steps:
-- db2set DB2COMM=TCPIP
-- db2set DB2ENVLIST=AMT_DATA_PATH
-- db2 -v update dbm cfg using federated yes immediate
set a password for the db2inst1 account in the Dockerfile (a consolidated sketch of these pre-db2start steps appears after the enable_MQFunctions fragment below)
the Dockerfile can then run db2start, create the database (I call it sample; you can call it whatever you like) and run the fragment below as user db2inst1 to first create the required objects in the database used by the MQ functions:
su - db2inst1 -c "( db2 -v connect to sample ; \
db2 -tvf /home/db2inst1/sqllib/cfg/mq/amtsetup.sql; \
db2 -v list tables for schema DB2MQ ; \
exit 0 ) "
Notice that you have to run amtsetup.sql in a subshell, as shown, so you can explicitly exit 0, because amtsetup.sql always returns a non-zero exit code even when it completes successfully, and you want the docker build to continue in that case.
If all the above steps completed successfully and MQ is already successfully installed, later in the Dockerfile you can run the enable_MQFunctions as follows:
I use ARG INSTANCE_PASSWORD to specify the db2inst1 password, which can be supplied from outside via --build-arg.
su - db2inst1 -c "( . ./.profile ;\
db2start ;\
db2 -v activate database sample ;\
cd /home/db2inst1/sqllib/cfg ; \
/home/db2inst1/sqllib/bin/enable_MQFunctions -echo -force -n sample -u db2inst1 -p $INSTANCE_PASSWORD ; \
db2stop force ; \
exit 0)"
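Pulling the earlier steps together, a hedged Dockerfile fragment for the configuration that precedes db2start could look like this (paths and names follow the steps above; the exact wiring is an assumption, so adjust it to your image):
ARG INSTANCE_PASSWORD
RUN printf '%s\n' \
      'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/mqm/lib64' \
      'export AMT_DATA_PATH=/opt/mqm' \
      'export PATH=/opt/mqm/bin:$PATH' \
      > /home/db2inst1/sqllib/userprofile \
 && chown db2inst1:db2iadm1 /home/db2inst1/sqllib/userprofile \
 && echo "db2inst1:${INSTANCE_PASSWORD}" | chpasswd \
 && su - db2inst1 -c "db2set DB2COMM=TCPIP && db2set DB2ENVLIST=AMT_DATA_PATH && db2 -v update dbm cfg using federated yes immediate"
The password can then be passed at build time, for example (the db2-mq tag is just a placeholder):
docker build --build-arg INSTANCE_PASSWORD=yourpassword -t db2-mq .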
The problem was with environment variables. After being built, my image did not keep any of the variables; I tried with the export prefix but nothing changed. So there was no password and no correct LD_LIBRARY_PATH. Even after I changed a variable and logged out, it went back to the default.
After I ran passwd as root for my account (db2inst1), I could execute enable_MQFunctions with the correct password.
The next error is that I don't have a valid license for DB2...
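That behaviour is expected: each RUN step (and each interactive shell in a container) gets its own environment, and plain export statements are not baked into the image. Variables that every container should see have to be declared in the Dockerfile, for example (values taken from the steps above; a sketch, not the only way to do it):
ENV LD_LIBRARY_PATH=/opt/mqm/lib64 \
    AMT_DATA_PATH=/opt/mqm \
    PATH=/opt/mqm/bin:$PATH
Variables that only the db2inst1 login shell needs can instead go into /home/db2inst1/sqllib/userprofile, as described in the answer above.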

Failed to Call Access Method Exception when Creating a MedicationOrder in FHIR

I am using this test FHIR server, http://fhirtest.uhn.ca/baseDstu2, and it has worked okay so far.
Now I am getting an HTTP 500 - Failed to Call Access Method exception.
Does anyone have any idea what has gone wrong?
This happens frequently, probably because someone tested weird queries or similar and put the server into an unstable state.
I suggest posting a comment in https://chat.fhir.org/#narrow/stream/hapi to get the server restarted,
or installing http://hapifhir.io/doc_cli.html, which does basically the same thing but gives you full control.
I built a Dockerfile:
FROM debian:sid
MAINTAINER Günter Zöchbauer <guenter@yyy.com>
ENV DEBIAN_FRONTEND noninteractive
RUN \
apt-get -q update && \
DEBIAN_FRONTEND=noninteractive && \
apt-get install --no-install-recommends -y -q \
apt-transport-https \
apt-utils \
wget \
bzip2 \
default-jdk
# net-tools sudo procps telnet
RUN \
apt-get update && \
rm -rf /var/lib/apt/lists/*
# hapi-* below is expected in the build context; it can be downloaded from
# https://github.com/jamesagnew/hapi-fhir/releases/download/v2.0/hapi-fhir-2.0-cli.tar.bz2
ADD hapi-* /hapi_fhir_cli/
RUN ls -la
RUN ls -la /hapi_fhir_cli
ADD prepare_server.sh /hapi_fhir_cli/
RUN \
cd /hapi_fhir_cli && \
bash -c /hapi_fhir_cli/prepare_server.sh
ADD start.sh /hapi_fhir_cli/
WORKDIR /hapi_fhir_cli
EXPOSE 5555
ENTRYPOINT ["/hapi_fhir_cli/start.sh"]
This requires the following files in the same directory as the Dockerfile:
prepare_server.sh
#!/usr/bin/env bash
ls -la
./hapi-fhir-cli run-server --allow-external-refs &
while ! timeout 1 bash -c "echo > /dev/tcp/localhost/8080"; do sleep 10; done
./hapi-fhir-cli upload-definitions -t http://localhost:8080/baseDstu2
./hapi-fhir-cli upload-examples -c -t http://localhost:8080/baseDstu2
start.sh
#!/usr/bin/env bash
cd /hapi_fhir_cli
./hapi-fhir-cli run-server --allow-external-refs -p 5555
Build
docker build -t myname/hapi_fhir_cli_dstu2 . #--no-cache
Run
docker run -d -p 5555:5555 [image id from docker build]
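Once the container is up, you can check that the server responds (the base path matches the one used in prepare_server.sh above):
curl http://localhost:5555/baseDstu2/metadata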
Hope this helps.

Dockerfile with entrypoint for executing bash script

I downloaded the Docker files from the official repository (version 2.3), and now I want to build the image and load some local data (test.json) into the container. It is not enough just to run COPY test.json /usr/share/elasticsearch/data/, because in that case the data is not indexed.
What I want to achieve is to be able to run sudo docker run -d -p 9200:9200 -p 9300:9300 -v /home/gosper/tests/tempESData/:/usr/share/elasticsearch/data test/elasticsearch, and afterwards I want to be able to see the mapped data at http://localhost:9200/tests/test/999.
If I use the Dockerfile and .sh script given below, I get the following error: Failed to connect to localhost port 9200: Connection refused
This is the Dockerfile from which I build the image:
FROM java:8-jre
# grab gosu for easy step-down from root
ENV GOSU_VERSION 1.7
RUN set -x \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture)" \
&& wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture).asc" \
&& export GNUPGHOME="$(mktemp -d)" \
&& gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 \
&& gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu \
&& rm -r "$GNUPGHOME" /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu \
&& gosu nobody true
# https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-repositories.html
# https://packages.elasticsearch.org/GPG-KEY-elasticsearch
RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys 46095ACC8548582C1A2699A9D27D666CD88E42B4
ENV ELASTICSEARCH_VERSION 2.3.4
ENV ELASTICSEARCH_REPO_BASE http://packages.elasticsearch.org/elasticsearch/2.x/debian
RUN echo "deb $ELASTICSEARCH_REPO_BASE stable main" > /etc/apt/sources.list.d/elasticsearch.list
RUN set -x \
&& apt-get update \
&& apt-get install -y --no-install-recommends elasticsearch=$ELASTICSEARCH_VERSION \
&& rm -rf /var/lib/apt/lists/*
ENV PATH /usr/share/elasticsearch/bin:$PATH
WORKDIR /usr/share/elasticsearch
RUN set -ex \
&& for path in \
./data \
./logs \
./config \
./config/scripts \
; do \
mkdir -p "$path"; \
chown -R elasticsearch:elasticsearch "$path"; \
done
COPY config ./config
VOLUME /usr/share/elasticsearch/data
COPY docker-entrypoint.sh /
EXPOSE 9200 9300
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["elasticsearch"]
COPY template.json /usr/share/elasticsearch/data/
RUN /bin/bash -c "source /docker-entrypoint.sh"
This is the docker-entrypoint.sh in which I added the line curl -XPOST http://localhost:9200/uniko-documents/document/978-1-60741-503-9 -d "/usr/share/elasticsearch/data/template.json":
#!/bin/bash
set -e
# Add elasticsearch as command if needed
if [ "${1:0:1}" = '-' ]; then
set -- elasticsearch "$@"
fi
# Drop root privileges if we are running elasticsearch
# allow the container to be started with `--user`
if [ "$1" = 'elasticsearch' -a "$(id -u)" = '0' ]; then
# Change the ownership of /usr/share/elasticsearch/data to elasticsearch
chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data
set -- gosu elasticsearch "$@"
#exec gosu elasticsearch "$BASH_SOURCE" "$@"
fi
curl -XPOST http://localhost:9200/tests/test/999 -d "/usr/share/elasticsearch/data/test.json"
# As argument is not related to elasticsearch,
# then assume that user wants to run his own process,
# for example a `bash` shell to explore this image
exec "$#"
Remove the following from your docker-entrypoint.sh:
curl -XPOST http://localhost:9200/tests/test/999 -d "/usr/share/elasticsearch/data/test.json"
It's running before you exec the service at the end.
In your Dockerfile, move the following after any commands that modify the directory:
VOLUME /usr/share/elasticsearch/data
Once you create a volume, future changes to the directory are typically ignored.
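For example, the tail of the Dockerfile above could be reordered like this (a sketch; the extra chown is an assumption so the copied file is owned by the elasticsearch user):
COPY template.json /usr/share/elasticsearch/data/
RUN chown elasticsearch:elasticsearch /usr/share/elasticsearch/data/template.json
VOLUME /usr/share/elasticsearch/data
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
EXPOSE 9200 9300
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["elasticsearch"]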
Lastly, in your Dockerfile, this line at the end likely doesn't do what you think; I'd remove it:
RUN /bin/bash -c "source /docker-entrypoint.sh"
The entrypoint.sh should be run when you start the container, not when you're building it.
@Klue, in case you still need it: you need to change the -d option on your curl command to --data-binary. -d strips the newlines; that's why you are getting the errors.
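Note also that curl only sends the contents of a file when the path is prefixed with @; otherwise it sends the literal string. A corrected call (using the names from the question) would be:
curl -XPOST http://localhost:9200/tests/test/999 --data-binary "@/usr/share/elasticsearch/data/test.json"
and it has to run after Elasticsearch is actually listening on port 9200, e.g. from a loop that polls the port before posting.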
