Jenkinsfile docker exec error executable file not found in $PATH while running ruby tests - ruby

I have a stage in my Jenkinsfile that runs some Ruby test files using rake test. But the tests are taking too long, so I am planning to run the individual tests in parallel. However, I am getting an error while executing individual tests in parallel stages.
All the test cases are run using rake test when we are in the lib folder.
Individual test cases are run using ruby test1.rb when we are in the lib/test folder.
Currently working Jenkinsfile:
stage('Test Image') {
steps {
script {
sh "docker run --rm --entrypoint '' -v \${AWS_CONFIG_FILE:-/home/ubuntu/.aws/config}:/root/.aws/config:ro -v \${AWS_SHARED_CREDENTIALS_FILE:-/home/ubuntu/.aws/credentials}:/root/.aws/credentials:ro -v ${WORKSPACE}/test-results:/srv/www/lib/test/html_reports ${IMAGE_NAME} rake test"
}
}
}
Modified Jenkinsfile with Parallel tests:
stage('Test the Image') {
parallel {
stage('Test1'){
steps {
script {
sh "docker run --rm --entrypoint '' -v \${AWS_CONFIG_FILE:-/home/ubuntu/.aws/config}:/root/.aws/config:ro -v \${AWS_SHARED_CREDENTIALS_FILE:-/home/ubuntu/.aws/credentials}:/root/.aws/credentials:ro -v ${WORKSPACE}/test-results:/srv/www/lib/test/html_reports docker exec -it ${IMAGE_NAME} bash -c 'cd test && ruby test1.rb'"
}
}
}
stage('Test2'){
steps {
script {
sh "docker run --rm --entrypoint '' -v \${AWS_CONFIG_FILE:-/home/ubuntu/.aws/config}:/root/.aws/config:ro -v \${AWS_SHARED_CREDENTIALS_FILE:-/home/ubuntu/.aws/credentials}:/root/.aws/credentials:ro -v ${WORKSPACE}/test-results:/srv/www/lib/test/html_reports docker exec -it ${IMAGE_NAME} bash -c 'cd test && ruby test2.rb'"
}
}
}
}
}
Error: "docker exec": executable file not found in $PATH
Docker file:
FROM ruby:2.5.3 as build
RUN apt-get update && \
apt-get install -qy \
apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN apt-key fingerprint 0EBFCD88
RUN add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
RUN apt-get update && \
apt-get install -qy \
docker-ce \
docker-ce-cli \
containerd.io \
build-essential \
libldap-dev \
libsasl2-dev \
libxml2-dev \
libxslt-dev \
libldap2-dev \
libsasl2-dev \
git \
jq
RUN curl -fsSL -o /usr/local/bin/aws-vault 'https://github.com/99designs/aws-vault/releases/download/v5.1.2/aws-vault-linux-amd64' && \
chmod 755 /usr/local/bin/aws-vault
RUN gem install bundler -v '2.0.2'
WORKDIR /tmp
COPY Gemfile* ./
RUN bundler install --without=development
FROM ruby:2.5.3-slim
RUN mkdir -p /srv/www/lib /srv/www/tmp/sockets /srv/www/tmp/pids
COPY --from=build /usr/local/bundle /usr/local/bundle
COPY --from=build /usr/local/bin/aws-vault /usr/local/bin/aws-vault
COPY --from=build /usr/bin/docker* /usr/bin/
COPY ops-cli2.rb /srv/www/lib/cli2.rb
ENV AWS_SHARED_CREDENTIALS_FILE=/root/.aws/credentials
WORKDIR /srv/www/lib
COPY . .
RUN ["chmod", "+x", "/srv/www/lib/cli2.rb"]
ENTRYPOINT ["/srv/www/lib/cli2.rb"]

You can just remove docker exec -it from your command line. docker run treats everything after the options as the image and the command to run inside the new container, while docker exec is for containers that are already running, so the nested docker exec ends up being executed inside a container where it fails with "executable file not found in $PATH". For example, the Test1 step becomes:
sh "docker run --rm --entrypoint '' -v \${AWS_CONFIG_FILE:-/home/ubuntu/.aws/config}:/root/.aws/config:ro -v \${AWS_SHARED_CREDENTIALS_FILE:-/home/ubuntu/.aws/credentials}:/root/.aws/credentials:ro -v ${WORKSPACE}/test-results:/srv/www/lib/test/html_reports ${IMAGE_NAME} bash -c 'cd test && ruby test1.rb'"

Related

High Latency is being observed in AWS ARM Graviton Processor in Comparison to AMD Processor for ASGI based Django Application

I am running an ASGI-based Django application (REST Framework) on AWS Kubernetes in the production environment. Everything runs fine on the AMD processors (c5.2xlarge, c5.4xlarge). To decrease cost we are trying to migrate the application to the AWS Graviton processors (c6g.2xlarge, c6g.4xlarge), but we are observing the 90th-percentile latency increase by up to 10x.
The command used for running the application -
DD_DJANGO_INSTRUMENT_MIDDLEWARE=false ddtrace-run gunicorn --workers 1 --worker-tmp-dir /dev/shm --log-file=- --thread 2 --bind :8080 --log-level INFO --timeout 5000 asgi:application -k uvicorn.workers.UvicornWorker
I have one more application that is WSGI-based, and it works fine on the Graviton processor.
Attaching the docker code -
FROM python:3.9-slim
RUN apt update -y
RUN mv /var/lib/dpkg/info/libc-bin.* /tmp/ && apt install libc-bin && mv /tmp/libc-bin.* /var/lib/dpkg/info/
#
### Create a group and user to run our app
## ARG APP_USER=user
## RUN groupadd -r ${APP_USER} && useradd --no-log-init -r -g ${APP_USER} ${APP_USER}
#
## Install packages needed to run your application (not build deps):
## mime-support -- for mime types when serving static files
## postgresql-client -- for running database commands
## We need to recreate the /usr/share/man/man{1..8} directories first because
## they were clobbered by a parent image.
RUN set -ex \
&& RUN_DEPS=" \
libpcre3 \
git \
mime-support \
postgresql-client \
libmagic1\
fail2ban libjpeg-dev libtiff5-dev zlib1g-dev libfreetype6-dev liblcms2-dev libxslt-dev libxml2-dev \
gdal-bin sysstat libpq-dev binutils libproj-dev procps" \
&& seq 1 8 | xargs -I{} mkdir -p /usr/share/man/man{} \
&& apt-get update && apt-get install -y --no-install-recommends $RUN_DEPS \
&& rm -rf /var/lib/apt/lists/*
ADD requirements /requirements
#ADD package.json package.json
#
## Install build deps, then run `pip install`, then remove unneeded build deps all in a single step.
## Correct the path to your production requirements file, if needed.
RUN set -ex \
&& BUILD_DEPS=" \
build-essential \
libpcre3-dev \
libpq-dev \
" \
&& apt-get update && apt-get install -y --no-install-recommends $BUILD_DEPS \
# && npm install --production --no-save \
&& pip install --no-cache-dir -r /requirements/requirements.txt \
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $BUILD_DEPS \
&& rm -rf /var/lib/apt/lists/*
#
RUN rm -rf /requirements
RUN mkdir /code/
WORKDIR /code/
ADD . /code/
COPY ./scripts /scripts
RUN chmod +x /scripts/*
RUN mkdir -p /vol/web/media
RUN mkdir -p /vol/web/static
RUN groupadd -r user
RUN useradd --no-log-init -r -g user user
RUN chown -R user:user /vol
RUN chmod -R 755 /vol/web
USER user
Python Modules -
aiohttp==3.8.1
aiosignal==1.2.0
amqp==5.1.1
anyio==3.6.1
asgiref==3.5.2
asttokens==2.0.5
async-timeout==4.0.2
attrs==21.4.0
aws-requests-auth==0.4.3
Babel==2.9.1
backcall==0.2.0
billiard==3.6.4.0
black==22.6.0
boto3==1.9.62
botocore==1.12.253
bytecode==0.13.0
celery==5.2.7
certifi==2022.6.15
charset-normalizer==2.0.12
click==8.1.3
click-didyoumean==0.3.0
click-plugins==1.1.1
click-repl==0.2.0
ddsketch==2.0.4
ddtrace==1.3.0
decorator==5.1.1
Django==4.0.1
django-appconf==1.0.5
django-cache-memoize==0.1.10
django-compressor==3.1
django-compressor-autoprefixer==0.1.0
django-cors-headers==3.11.0
django-datadog-logger==0.5.0
django-elasticsearch-dsl==7.2.2
django-elasticsearch-dsl-drf==0.22.4
django-environ==0.8.1
django-extensions==3.1.5
django-libsass==0.9
django-log-request-id==2.0.0
django-nine==0.2.5
django-prometheus==2.2.0
django-sites==0.11
django-storages==1.12.3
django-uuid-upload-path==1.0.0
django-versatileimagefield==2.2
djangorestframework==3.13.1
djangorestframework-gis==0.18
docutils==0.15.2
elasticsearch==7.17.1
elasticsearch-dsl==7.4.0
executing==0.8.3
frozenlist==1.3.0
geographiclib==1.52
geopy==2.2.0
gunicorn==20.1.0
h11==0.12.0
httpcore==0.14.7
httpx==0.21.3
idna==3.3
ipython==8.0.1
jedi==0.18.1
jmespath==0.10.0
JSON-log-formatter==0.5.1
kombu==5.2.4
libsass==0.21.0
matplotlib-inline==0.1.3
multidict==6.0.2
mypy-extensions==0.4.3
packaging==21.3
parso==0.8.3
pathspec==0.9.0
pexpect==4.8.0
pickleshare==0.7.5
Pillow==9.2.0
platformdirs==2.5.2
prometheus-client==0.14.1
prompt-toolkit==3.0.30
protobuf==4.21.2
psycopg2-binary==2.9.3
ptyprocess==0.7.0
pure-eval==0.2.2
Pygments==2.12.0
pyparsing==3.0.9
python-dateutil==2.8.2
python-dotenv==0.19.2
python-magic==0.4.27
pytz==2021.3
rcssmin==1.1.0
regex==2022.1.18
requests==2.27.1
rfc3986==1.5.0
rjsmin==1.2.0
s3transfer==0.1.13
six==1.16.0
sniffio==1.2.0
sqlparse==0.4.2
stack-data==0.3.0
tenacity==8.0.1
tomli==2.0.1
traitlets==5.3.0
typing_extensions==4.3.0
urllib3==1.25.11
uvicorn==0.17.0
vine==5.0.0
wcwidth==0.2.5
whitenoise==5.3.0
yarl==1.7.2
Docker build command -
docker buildx build --push --platform linux/amd64,linux/arm64 -t ${ECR_LATEST_TAGX} -t ${ECR_VERSION_TAGX} --output=type=image --file ../incr.Dockerfile ../

Can't Create Symbolic Link in Docker container

I am trying to create a symbolic link in a Docker container running Ubuntu. I'm using docker-compose to launch the container.
In services.yml I have...
...
services:
test:
depends_on:
- couchbase
build:
context: .
dockerfile: test-dev.dockerfile
user: "node"
working_dir: /var/www/modules/test
volumes:
- ../:/var/www
environment:
- NODE_ENV=development
- NODE_APP_INSTANCE=docker
- DEBUG=test-api:*
command: bash -c 'sh ../../docker/test-dev.init.sh && yarn && yarn dev'
In test-dev.dockerfile I have:
FROM couchbase:community-6.5.1
# Install node and yarn
RUN apt update && \
apt install -y curl apt-transport-https gnupg && \
apt-get install -y nodejs && \
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list && \
apt -y install curl dirmngr apt-transport-https lsb-release ca-certificates && \
curl -sL https://deb.nodesource.com/setup_12.x | bash - && \
apt update && \
apt install -y yarn nodejs
# Add node user
RUN useradd -ms /bin/bash node && usermod -aG sudo node
In test-dev.init.sh I have...
ln -s /opt/couchbase/bin/cbbackupmgr /var/www/modules/dir/bin/cbbackupmgr
I keep getting the following error:
ln: failed to create symbolic link '/var/www/modules/backupmgr/bin/cbbackupmgr'$'\r':
Permission denied
Any suggestions?

how to write the Dockerfile for a Ruby Capybara scraper?

I am trying to write a Dockerfile to run a Ruby Capybara scraper in a Docker container. I tested the following code on my host OS, but it raises an error in the container.
Dockerfile
FROM ruby:2.6.6
RUN apt-get update -y && \
apt-get install -y xvfb
RUN wget https://ftp.mozilla.org/pub/firefox/releases/80.0.1/linux-x86_64/en-US/firefox-80.0.1.tar.bz2
RUN tar -xjf firefox-80.0.1.tar.bz2
RUN mv firefox /opt/firefox80
RUN ln -s /opt/firefox80/firefox /usr/bin/firefox
RUN ls /opt/firefox80
RUN wget -N https://github.com/mozilla/geckodriver/releases/download/v0.27.0/geckodriver-v0.27.0-linux64.tar.gz
RUN tar -xvzf geckodriver-v0.27.0-linux64.tar.gz
RUN chmod +x geckodriver
RUN mv -f geckodriver /usr/local/share/geckodriver
RUN ln -s /usr/local/share/geckodriver /usr/local/bin/geckodriver
RUN ln -s /usr/local/share/geckodriver /usr/bin/geckodriver
RUN mkdir capybara
WORKDIR /capybara/
COPY . /capybara
RUN bundle install
main.rb
require 'capybara'
require 'capybara/dsl'
require 'selenium-webdriver'
include Capybara::DSL
Capybara.register_driver :selenium_headless_firefox do |app|
browser_options = ::Selenium::WebDriver::Firefox::Options.new()
browser_options.args << '--headless'
Capybara::Selenium::Driver.new(
app,
browser: :firefox,
options: browser_options
)
end
target = "https://maps.google.com/?cid=13666314335012854449"
session = Capybara::Session.new(:selenium_headless_firefox)
session.visit(target)
Gemfile
source 'https://rubygems.org'
gem 'selenium-webdriver'
gem 'capybara', '~>3.30'
gem 'geckodriver-helper'
Error Message on Docker
/usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/remote/response.rb:72:in `assert_ok': invalid argument: can't kill an exited process (Selenium::WebDriver::Error::UnknownError)
from /usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/remote/response.rb:34:in `initialize'
from /usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/remote/http/common.rb:88:in `new'
from /usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/remote/http/common.rb:88:in `create_response'
from /usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/remote/http/default.rb:114:in `request'
from /usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/remote/http/common.rb:64:in `call'
from /usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/remote/bridge.rb:167:in `execute'
from /usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/remote/bridge.rb:102:in `create_session'
from /usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/firefox/marionette/driver.rb:44:in `initialize'
from /usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/firefox/driver.rb:33:in `new'
from /usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/firefox/driver.rb:33:in `new'
from /usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver/common/driver.rb:54:in `for'
from /usr/local/bundle/gems/selenium-webdriver-3.142.7/lib/selenium/webdriver.rb:88:in `for'
from /usr/local/bundle/gems/capybara-3.33.0/lib/capybara/selenium/driver.rb:52:in `browser'
from /usr/local/bundle/gems/capybara-3.33.0/lib/capybara/selenium/driver.rb:71:in `visit'
from /usr/local/bundle/gems/capybara-3.33.0/lib/capybara/session.rb:278:in `visit'
This is what I get when I run the main.rb file in a docker container. I am looking forward to any help from the developer community.
I ran the main.rb file with docker run [docker_image] ruby main.rb
The issue is not with Capybara but with Firefox: the tar.bz2 file you download does not contain its dependencies, which causes it to crash. The easiest solution is to install it via apt. Assuming all your files are in the same directory, your Dockerfile should look like:
FROM ruby:2.6.6
WORKDIR /app
COPY . .
RUN apt-get update -y && \
apt-get install -y xvfb firefox-esr && \
wget -N https://github.com/mozilla/geckodriver/releases/download/v0.27.0/geckodriver-v0.27.0-linux64.tar.gz && \
tar -xvzf geckodriver-v0.27.0-linux64.tar.gz && \
chmod +x geckodriver && \
mv -f geckodriver /usr/local/share/geckodriver && \
ln -s /usr/local/share/geckodriver /usr/local/bin/geckodriver && \
ln -s /usr/local/share/geckodriver /usr/bin/geckodriver && \
bundle install && \
apt-get clean && \
rm geckodriver-v0.27.0-linux64.tar.gz && \
rm -rf /var/lib/apt/lists/*
CMD [ "ruby", "/app/main.rb" ]
Then you can run:
docker build -t capybara:latest . # Build image
docker run -it --rm --env DISPLAY=$DISPLAY --volume="$HOME/.Xauthority:/root/.Xauthority:rw" --net=host capybara:latest firefox # Verify Firefox works
docker run -it --rm capybara:latest # Run your script
Note: The second command will only work on Linux, running dockerized Linux GUI applications on Windows is a bit more difficult and requires some additional setup.
Edit:
There is no such thing as installing something "on Docker". Docker is not an OS; it's an application containerization framework. It can run various operating systems inside its containers (or no OS at all, as with minimal base images). That means the method of installing something inside a Docker image (or a running container, which is not recommended) depends on what's already installed.
In this case your base image ruby:2.6.6 is based on a Debian Buster image (see its Dockerfile), so you need to install the browser the way you would on a regular desktop or server install of that system.
Debian Buster does not come with Chrome, because it is not open source. To install its open-source equivalent, Chromium, modify your Dockerfile as follows:
FROM ruby:2.6.6
WORKDIR /app
COPY . .
RUN apt-get update -y && \
apt-get install -y xvfb chromium && \
wget -N https://github.com/mozilla/geckodriver/releases/download/v0.27.0/geckodriver-v0.27.0-linux64.tar.gz && \
tar -xvzf geckodriver-v0.27.0-linux64.tar.gz && \
chmod +x geckodriver && \
mv -f geckodriver /usr/local/share/geckodriver && \
ln -s /usr/local/share/geckodriver /usr/local/bin/geckodriver && \
ln -s /usr/local/share/geckodriver /usr/bin/geckodriver && \
bundle install && \
apt-get clean && \
rm geckodriver-v0.27.0-linux64.tar.gz && \
rm -rf /var/lib/apt/lists/*
CMD [ "ruby", "/app/main.rb" ]
If you really need Chrome, follow the official documentation (keeping in mind you need to remove the archive files after installation). With that said, the Dockerfile for Chrome would be:
FROM ruby:2.6.6
WORKDIR /app
COPY . .
RUN apt-get update -y && \
apt-get install -y xvfb && \
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb && \
apt install -y ./google-chrome-stable_current_amd64.deb && \
wget -N https://github.com/mozilla/geckodriver/releases/download/v0.27.0/geckodriver-v0.27.0-linux64.tar.gz && \
tar -xvzf geckodriver-v0.27.0-linux64.tar.gz && \
chmod +x geckodriver && \
mv -f geckodriver /usr/local/share/geckodriver && \
ln -s /usr/local/share/geckodriver /usr/local/bin/geckodriver && \
ln -s /usr/local/share/geckodriver /usr/bin/geckodriver && \
bundle install && \
apt-get clean && \
rm google-chrome-stable_current_amd64.deb && \
rm geckodriver-v0.27.0-linux64.tar.gz && \
rm -rf /var/lib/apt/lists/*
CMD [ "ruby", "/app/main.rb" ]

neo4j 4.0 testing with embedded database DatabaseManagementServiceBuilder is found nowhere

I'm a beginner with neo4j. I'm trying to build tests using an embedded neo4j database inside a Spring Boot application. I haven't succeeded, since the class DatabaseManagementServiceBuilder is nowhere to be found. Please note I'm using version 4.0.2. Any help please?
The full classname is org.neo4j.dbms.api.DatabaseManagementServiceBuilder.
Here is a sample class that uses the builder.
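A minimal sketch of what such a class can look like in neo4j 4.0.x, assuming the org.neo4j:neo4j dependency is on the classpath (the store directory and the tiny write are illustrative only):

```java
import java.io.File;

import org.neo4j.dbms.api.DatabaseManagementService;
import org.neo4j.dbms.api.DatabaseManagementServiceBuilder;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;

import static org.neo4j.configuration.GraphDatabaseSettings.DEFAULT_DATABASE_NAME;

public class EmbeddedNeo4jExample {
    public static void main(String[] args) {
        // Build an embedded DBMS rooted at a throwaway directory
        DatabaseManagementService managementService =
                new DatabaseManagementServiceBuilder(new File("target/neo4j-test")).build();
        GraphDatabaseService db = managementService.database(DEFAULT_DATABASE_NAME);

        // In 4.x, queries run against a transaction, not the database object
        try (Transaction tx = db.beginTx()) {
            tx.execute("CREATE (n:Person {name: 'test'})");
            tx.commit();
        }

        // Always shut down, or the store directory stays locked
        managementService.shutdown();
    }
}
```

For tests specifically, the neo4j test utilities also ship a TestDatabaseManagementServiceBuilder with an impermanent mode, which avoids leftover store files between runs.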
I struggled with using an embedded neo4j db for my tests a few months back as well.
In case you don't find a suitable solution for the embedded version: I ended up starting a real instance of the db instead.
I adjusted neo4j's official Dockerfile a bit to use the JDK instead of the JRE and was able to run my tests against it.
Here's the Dockerfile, starting from the official 3.4.5-enterprise Dockerfile:
FROM openjdk:8-jdk-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
make && \
rm -rf /var/lib/apt/lists/*
ADD maven-settings.xml /root/.m2/settings.xml
# from official neo4j:3.4.5-enterprise image content (changed base image to use jdk instead of jre)
RUN addgroup --system neo4j && adduser --system --no-create-home --home /var/lib/neo4j --ingroup neo4j neo4j
ENV NEO4J_SHA256=0629f17a99ba90d6900c98f332c775a732cc2ad6298b8df41a2872277b19e6e3 \
NEO4J_TARBALL=neo4j-enterprise-3.4.5-unix.tar.gz \
NEO4J_EDITION=enterprise \
NEO4J_ACCEPT_LICENSE_AGREEMENT=yes \
TINI_VERSION="v0.18.0" \
TINI_SHA256="12d20136605531b09a2c2dac02ccee85e1b874eb322ef6baf7561cd93f93c855"
ARG NEO4J_URI=http://dist.neo4j.org/neo4j-enterprise-3.4.5-unix.tar.gz
RUN apt update \
&& apt install -y \
bash \
curl \
&& curl -L --fail --silent --show-error "https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini" > /sbin/tini \
&& echo "${TINI_SHA256} /sbin/tini" | sha256sum -c --strict --quiet \
&& chmod +x /sbin/tini \
&& curl --fail --silent --show-error --location --remote-name ${NEO4J_URI} \
&& echo "${NEO4J_SHA256} ${NEO4J_TARBALL}" | sha256sum -c --strict --quiet \
&& tar --extract --file ${NEO4J_TARBALL} --directory /var/lib \
&& mv /var/lib/neo4j-* /var/lib/neo4j \
&& rm ${NEO4J_TARBALL} \
&& mv /var/lib/neo4j/data /data \
&& chown -R neo4j:neo4j /data \
&& chmod -R 777 /data \
&& chown -R neo4j:neo4j /var/lib/neo4j \
&& chmod -R 777 /var/lib/neo4j \
&& ln -s /data /var/lib/neo4j/data
# Install latest su-exec
RUN set -ex; \
\
curl -o /usr/local/bin/su-exec.c https://raw.githubusercontent.com/ncopa/su-exec/master/su-exec.c; \
\
fetch_deps='gcc libc-dev'; \
apt-get update; \
apt-get install -y --no-install-recommends $fetch_deps; \
rm -rf /var/lib/apt/lists/*; \
gcc -Wall \
/usr/local/bin/su-exec.c -o/usr/local/bin/su-exec; \
chown root:root /usr/local/bin/su-exec; \
chmod 0755 /usr/local/bin/su-exec; \
rm /usr/local/bin/su-exec.c; \
\
apt-get purge -y --auto-remove $fetch_deps
ENV PATH /var/lib/neo4j/bin:$PATH
ARG NEO4J_AUTH=neo4j/neo4jtest
ENV NEO4J_AUTH=${NEO4J_AUTH}
WORKDIR /var/lib/neo4j
VOLUME /data
COPY docker-entrypoint.sh /docker-entrypoint.sh
EXPOSE 7474 7473 7687
ENTRYPOINT ["/sbin/tini", "-g", "--", "/docker-entrypoint.sh"]
CMD ["neo4j"]
I used the original docker-entrypoint.sh script.

Failed to Call Access Method Exception when Creating a MedicationOrder in FHIR

I am using the http://fhirtest.uhn.ca/baseDstu2 test FHIR server, and it has worked okay so far.
Now I am getting an HTTP 500 - Failed to Call Access Method exception.
Anyone has any idea on what has gone wrong?
This happens frequently, probably because someone tested weird queries or something similar that put the server in an unstable state.
I suggest posting a comment in https://chat.fhir.org/#narrow/stream/hapi to get the server restarted, or installing http://hapifhir.io/doc_cli.html, which does basically the same but gives you full control.
I built a Dockerfile:
FROM debian:sid
MAINTAINER Günter Zöchbauer <guenter#yyy.com>
ENV DEBIAN_FRONTEND noninteractive
RUN \
apt-get -q update && \
DEBIAN_FRONTEND=noninteractive && \
apt-get install --no-install-recommends -y -q \
apt-transport-https \
apt-utils \
wget \
bzip2 \
default-jdk
# net-tools sudo procps telnet
RUN \
apt-get update && \
rm -rf /var/lib/apt/lists/* && \
wget https://github.com/jamesagnew/hapi-fhir/releases/download/v2.0/hapi-fhir-2.0-cli.tar.bz2
ADD hapi-* /hapi_fhir_cli/
RUN ls -la
RUN ls -la /hapi_fhir_cli
ADD prepare_server.sh /hapi_fhir_cli/
RUN \
cd /hapi_fhir_cli && \
bash -c /hapi_fhir_cli/prepare_server.sh
ADD start.sh /hapi_fhir_cli/
WORKDIR /hapi_fhir_cli
EXPOSE 5555
ENTRYPOINT ["/hapi_fhir_cli/start.sh"]
Which requires in the same directory as the Dockerfile
prepare_server.sh
#!/usr/bin/env bash
ls -la
./hapi-fhir-cli run-server --allow-external-refs &
while ! timeout 1 bash -c "echo > /dev/tcp/localhost/8080"; do sleep 10; done
./hapi-fhir-cli upload-definitions -t http://localhost:8080/baseDstu2
./hapi-fhir-cli upload-examples -c -t http://localhost:8080/baseDstu2
start.sh
#!/usr/bin/env bash
cd /hapi_fhir_cli
./hapi-fhir-cli run-server --allow-external-refs -p 5555
Build
docker build -t myname/hapi_fhir_cli_dstu2 . #--no-cache
Run
docker run -d -p 5555:5555 [image id from docker build]
Hope this helps.
