Is it possible to compare an ENV variable in a Dockerfile with a regex or something like a "contains" check? - bash

I'd like to check my VERSION argument against the regex ".*feature.*", i.e. whether the version string contains "feature", so I can build a conditional Docker image.
The Dockerfile currently looks like this:
FROM docker.INSERTURL.com/fe/plattform-nginx:1.14.0-01
ARG ARTIFACTORY_USER
ARG ARTIFACTORY_PW
ARG VERSION
# Download sources from Repository
ADD https://${ARTIFACTORY_USER}:${ARTIFACTORY_PW}@INSERTURL.com/artifactory/api/npm/angular.npm/angular-frontend-app/-/angular-frontend-app-${VERSION}.tgz app.tar.gz
# Extract and move to nginx html folder
RUN tar -xzf app.tar.gz
RUN mv ./package/dist/angular-frontend-app/* /usr/share/nginx/html
# Start nginx via script, which replaces static urls with environment variables
ADD start.sh /usr/share/nginx/start.sh
RUN chmod +x /usr/share/nginx/start.sh
# Overwrite nginx.conf
ADD nginx.conf /etc/nginx/nginx.conf
# Fix permissions for runtime
RUN chmod 777 /var/log/nginx /usr/share/nginx/html
CMD /usr/share/nginx/start.sh
I'd like it to only download the sources from Artifactory if the VERSION doesn't contain "feature" in its name.
I imagine it'd look like this:
FROM docker.INSERTURL.com/fe/plattform-nginx:1.14.0-01
ARG ARTIFACTORY_USER
ARG ARTIFACTORY_PW
ARG VERSION
if [ "$VERSION" = ".*feature.*" ]; then
# Download sources from Repository
ADD https://${ARTIFACTORY_USER}:${ARTIFACTORY_PW}@INSERTURL.com/artifactory/api/npm/angular.npm/angular-frontend-app/-/angular-frontend-app-${VERSION}.tgz app.tar.gz
fi
# Extract and move to nginx html folder
RUN tar -xzf app.tar.gz
RUN mv ./package/dist/angular-frontend-app/* /usr/share/nginx/html
# Start nginx via script, which replaces static urls with environment variables
ADD start.sh /usr/share/nginx/start.sh
RUN chmod +x /usr/share/nginx/start.sh
# Overwrite nginx.conf
ADD nginx.conf /etc/nginx/nginx.conf
# Fix permissions for runtime
RUN chmod 777 /var/log/nginx /usr/share/nginx/html
CMD /usr/share/nginx/start.sh
Do you know if it's possible to check Dockerfile ARGs and ENVs with a regex?

There are no conditionals in Dockerfiles. You can run arbitrary shell code inside a single RUN step, but that's as close as you can get.
If your base image has an HTTP client like curl you could build a combined command:
RUN if [ $(expr "$VERSION" : '.*feature.*') -eq 0 ]; then \
curl -o app.tar.gz https://${ARTIFACTORY_USER}:${ARTIFACTORY_PW}@INSERTURL.com/artifactory/api/npm/angular.npm/angular-frontend-app/-/angular-frontend-app-${VERSION}.tgz \
&& tar -xzf app.tar.gz \
&& mv ./package/dist/angular-frontend-app/* /usr/share/nginx/html \
&& rm -r app.tar.gz package \
; fi
(The expr invocation tries to match $VERSION against that regular expression, and absent any \(...\) match groups, returns the number of characters that matched; that is zero if the regexp does not match.)
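For instance, with two hypothetical version strings:
expr "2.1.0-feature-login" : '.*feature.*'   # prints 19, the number of characters matched, so the test above skips the download
expr "2.1.0" : '.*feature.*'                 # prints 0, no match, so the download runs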
You can also consider using multiple Dockerfiles for the different variants, or having an intermediate image with this frontend app installed and then dynamically selecting the FROM line for your final image. Also remember that these credentials will be visible in cleartext in the image's docker history to anyone who eventually gets the built image.
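One way to select the base dynamically is a build argument in the FROM line (a sketch; BASE_IMAGE and the alternate image name are hypothetical, and this needs a Docker version that supports ARG before FROM):
# Build with e.g.: docker build --build-arg BASE_IMAGE=docker.INSERTURL.com/fe/frontend-app-base:1.0 .
ARG BASE_IMAGE=docker.INSERTURL.com/fe/plattform-nginx:1.14.0-01
FROM ${BASE_IMAGE}
# ... rest of the Dockerfile unchanged ...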

You can do your if + curl in a RUN command; this should work.


Docker container unable to ignore the EntryPoint bash script failure

Bash script:
clonePath=/data/config/
git branch -r | fgrep -v 'origin/HEAD' | sed 's| origin/|git checkout |' > checkoutAllBranches.sh
chmod +x checkoutAllBranches.sh
echo "Fetch branch: `cat checkoutAllBranches.sh`"
./checkoutAllBranches.sh
git checkout master
git remote rm origin
rm checkoutAllBranches.sh
for config_dir in `ls -a`; do
cp -r $config_dir $clonePath/;
done
echo "API Config update complete..."
Dockerfile instruction that runs this script:
ENTRYPOINT ["sh","config-update-force.sh","|| true"]
The error below causes the container startup to fail, despite my attempt to force the command's exit status to 0 with || true:
Error:
cp: cannot create regular file '/data/./.git/objects/pack/pack-27a9d...fb5e368e4cf.pack': Permission denied
cp: cannot create regular file '/data/./.git/objects/pack/pack-27a9d...fbae25e368e4cf.idx': Permission denied
I am looking for 2 options here:
Change these file permissions and then store them in the remote with rwx permissions
Do something in the Dockerfile to ignore this script failure and start the container.
DOCKERFILE:
FROM docker.hub.com/java11-temurin:latest
USER root
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get install -y rsync telnet vim wget git
RUN mkdir -p /opt/config/clone/data
RUN chown -R 1001:1001 /opt/config
USER 1001
ADD build/libs/my-api-config-server.jar .
ADD config-update-force.sh .
USER root
RUN chmod +x config-update-force.sh
USER 1001
EXPOSE 8080
CMD java $BASE_JAVA_OPTS $JAVA_OPTS -jar my-api-config-server.jar
ENTRYPOINT ["sh","config-update-force.sh","|| true"]
BASH SCRIPT:
#!/bin/bash
set +e
set +x
clonePath=/opt/clone/data/data
#source Optumfile.properties
echo "properties loaded: example ${git_host}"
if [ -d my-api-config ]; then
rm -rf my-api-config;
echo "existing my-api-config dir deleted..."
fi
git_url=https://github.com/my-api-config-server
git clone https://github.com/my-api-config-server
cd my-api-config-server
git branch -r | fgrep -v 'origin/HEAD' | sed 's| origin/|git checkout |' > checkoutAllBranches.sh
chmod +x checkoutAllBranches.sh
echo "Fetch branch: `cat checkoutAllBranches.sh`"
./checkoutAllBranches.sh
git checkout master
git remote rm origin
rm checkoutAllBranches.sh
for config_dir in `ls -a`; do
cp -r $config_dir $clonePath/;
done
echo "My API Config update complete..."
Since the script already does
chmod +x checkoutAllBranches.sh
why not also run, before the cp,
chmod -R +rwx ${clonePath}
Or, if the stderr message really "won't impact anything", silence it:
cp -r $config_dir $clonePath/ 2>/dev/null;
or at least don't run cp verbosely.
When your Dockerfile declares an ENTRYPOINT, that command is the only thing the container does. If it also declares a CMD, the CMD is passed as additional arguments to the ENTRYPOINT; it is not run on its own unless the ENTRYPOINT makes sure to execute it.
Shell errors are not normally fatal, and especially if you explicitly set +e, even if a shell command fails the shell script will keep running. You see this in your output where you get multiple cp errors; the first error does not terminate the script.
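A quick illustration of that default behavior (safe to run anywhere):
sh -c 'false; echo "still running"'          # prints "still running"; the failing command is ignored
sh -c 'set -e; false; echo "never printed"'  # exits after false, prints nothing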
You need to do two things here. The first is to set the ENTRYPOINT to actually run the CMD; the simplest and most common way to do this is to end the script with
exec "$#"
The second is to remove the || true from the Dockerfile. As you have it written currently, it is passed as a literal argument to the entrypoint script; it is not run through a shell and it is not interpreted as an "or" operator. If your script begins with a "shebang" line and is marked executable (both of which are correct in the question), then you do not need to name the sh interpreter explicitly.
# must be a JSON array; no additional "|| true" argument; no sh -c wrapper
ENTRYPOINT ["./config-update-force.sh"]
# any valid CMD will work with exec "$@"
CMD java $BASE_JAVA_OPTS $JAVA_OPTS -jar my-api-config-server.jar
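Concretely, a sketch of how the end of config-update-force.sh would then look (the rest of the script stays as in the question):
echo "My API Config update complete..."
# hand control to whatever was passed as CMD (the java command above),
# replacing this script as PID 1
exec "$@"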

Why does my bash script not run properly on every docker container that starts up?

My Dockerfile copies an init.sh script to the container.
# DOCKERFILE
FROM ubuntu:latest
# a bunch of installation commands
COPY init.sh /
ENTRYPOINT bash init.sh
EXPOSE 80
And I have a docker-compose file with 2 services:
Service1: This service is being scaled.
Service2: Database
I have it so that when the Service1 container starts up, this script will run.
#!/bin/bash
# script
# Missing files directory
if [[ ! -e /var/www/drupal/sites/default/files ]]; then
mkdir /var/www/drupal/sites/default/files
chmod a+w /var/www/drupal/sites/default/files
fi
# Missing settings file
cp /var/www/drupal/sites/default/default.settings.php /var/www/drupal/sites/default/settings.php
chmod a+w /var/www/drupal/sites/default/settings.php
# Install Drush & Install Drupal
cd /var/www/drupal && composer require --dev drush/drush
cd /var/www/drupal && vendor/bin/drush site-install standard \
--db-url=mysql://root:random@mariadb:3306/drupaldb -y \
--site-name=ExampleWebsite \
--account-name=random \
--account-pass=random
# Post-Installation Steps
chmod go-w /var/www/drupal/sites/default/settings.php
chmod go-w /var/www/drupal/sites/default
cd /var/www/drupal && vendor/bin/drush cache-rebuild
/usr/sbin/apache2ctl -D FOREGROUND
However, when I start the containers with scaling, docker-compose up -d --scale Service1=5, some of the containers run the script properly on startup but some don't. For the ones that don't, I have to go into the container and run the script manually, and then it's fine.
Shouldn't all the containers be identical and run the same script properly?
Instead, I have to go into some of them manually and run the script.

How to use docker ENTRYPOINT with shell script file combine parameter

I wrote a shell script and use it as the Docker ENTRYPOINT, but when I run the Docker image it just stops, without any error log, because of the ENTRYPOINT line.
My Dockerfile:
FROM ubuntu:16.04
MAINTAINER limtaegeun <imori333@gmail.com>
RUN apt-get update
RUN apt-get install -y nginx
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
# Define mountable directories.
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]
ENV CONTAINER_NAME nodejs
ENV SERVER_NAME myserver.com
ENV PEM_PATH /etc/nginx/certs/cert.pem
ENV KEY_PATH /etc/nginx/certs/cert.key
WORKDIR /etc/nginx
ADD ./sites-available/ssl /etc/nginx/sites-available/ssl
ADD ./docker-entrypoint.sh /etc/nginx/docker-entrypoint.sh
RUN chmod 777 /etc/nginx/docker-entrypoint.sh
EXPOSE 80 443
ENTRYPOINT /etc/nginx/docker-entrypoint.sh ${CONTAINER_NAME} ${SERVER_NAME} ${PEM_PATH} ${KEY_PATH}
CMD ["nginx"]
docker-entrypoint.sh
#!/bin/sh
CONTAINER_NAME=$1
SERVER_NAME=$2
PEM_PATH=$3
KEY_PATH=$4
rm -f /etc/nginx/sites-enabled/default
sed -ri 's#CONTAINER_NAME#'${CONTAINER_NAME}'#' /etc/nginx/sites-available/ssl
sed -ri 's#SERVER_NAME#'${SERVER_NAME}'#' /etc/nginx/sites-available/ssl
sed -ri 's#PEM_PATH#'${PEM_PATH}'#' /etc/nginx/sites-available/ssl
sed -ri 's#KEY_PATH#'${KEY_PATH}'#' /etc/nginx/sites-available/ssl
# cp -f sites-available/ssl sites-available/default
ln -s /etc/nginx/sites-available/ssl /etc/nginx/sites-enabled/default
My docker run command:
docker run -v /home/ubuntu/Docker-nginx-cloudflare-ssl-proxy/certs:/etc/nginx/certs \
--name nginx-ssl -p 443:443 -p 80:80 --network nginx-net --rm -d nginx-cloudfare-ssl-proxy
What is the problem?
When a Docker container is run, it runs the ENTRYPOINT (only), passing the CMD as command-line parameters, and when the ENTRYPOINT completes the container exits. In the Dockerfile the ENTRYPOINT has to be JSON-array syntax for it to be able to see the CMD arguments, and the script itself needs to actually run the CMD, typically with a line like exec "$#".
The single simplest thing you can do to clean this up is not to try to go back and forth between environment variables and positional parameters. The ENTRYPOINT script will be able to directly read the ENV variables you set in the Dockerfile (or override with docker run -e options). So delete the first lines of the script that set these variables from positional parameters, and make sure to run the CMD:
#!/bin/sh
# delete the lines that set CONTAINER_NAME et al.
rm -f /etc/nginx/sites-enabled/default
sed -ri 's#CONTAINER_NAME#'${CONTAINER_NAME}'#' /etc/nginx/sites-available/ssl
...
# and add this at the end
exec "$#"
and then change the Dockerfile to not pass positional parameters but do use JSON-array syntax for ENTRYPOINT
ENTRYPOINT ["/etc/nginx/docker-entrypoint.sh"]
CMD ["nginx"]
that should get you off the ground.
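For example, assuming the image tag from the question, the defaults can then be overridden at run time with -e (the SERVER_NAME value here is just a placeholder):
docker run -d --name nginx-ssl -p 80:80 -p 443:443 --network nginx-net \
  -v /home/ubuntu/Docker-nginx-cloudflare-ssl-proxy/certs:/etc/nginx/certs \
  -e SERVER_NAME=example.com \
  nginx-cloudfare-ssl-proxy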
It's worth considering how much of this you actually need to be configurable. For instance, would you ever need a path different from the default /etc/nginx/certs inside the isolated container filesystem space? Usually with the standard nginx Docker Hub image you work with it by injecting an entire complete configuration file and if you choose to do that it simplifies your Docker setup.
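A sketch of that approach with the stock Docker Hub image (the local file names are assumptions):
docker run -d --name nginx-ssl -p 80:80 -p 443:443 \
  -v "$PWD/default.conf:/etc/nginx/conf.d/default.conf:ro" \
  -v "$PWD/certs:/etc/nginx/certs:ro" \
  nginx:1.17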
Other generic suggestions: remove the VOLUME declarations (they potentially cause confusing behavior later in the Dockerfile and leak anonymous volumes and aren't otherwise necessary); don't make executable files world-writable (chmod 0755, not 0777); RUN apt-get update && apt-get install in the same Dockerfile command.
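Applied to the Dockerfile above, those points would look roughly like this (only the affected lines shown):
# one RUN so the package index from apt-get update is never stale in the build cache
RUN apt-get update \
 && apt-get install -y nginx
# executable but not world-writable
RUN chmod 0755 /etc/nginx/docker-entrypoint.sh
# and simply omit the VOLUME line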

How to run multiple entrypoint scripts one after another inside docker container?

I am trying to match the host UID with container UID as below.
Dockerfile
RUN addgroup -g 1000 deploy \
&& adduser -D -u 1000 -G deploy -s /bin/sh deploy
USER deploy
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm7","-F"]
entrypoint.sh
whoami # it outputs `deploy`
# Change UID of 'deploy' as per host user UID
HOST_CURRENT_USER_ID=$(stat -c "%u" /var/www/${PROJECT_NAME})
if [ ${HOST_CURRENT_USER_ID} -ne 0 ]; then
gosu root usermod -u ${HOST_CURRENT_USER_ID} deploy
gosu root groupmod -g ${HOST_CURRENT_USER_ID} deploy
fi
whoami # It outputs as unknown user id 1000.
Please note the output of whoami above. Even though I changed the UID of deploy to the host UID, the entrypoint script process doesn't pick up the change, because the entrypoint shell was started as UID 1000.
So I came up with a solution: make two entrypoint scripts, one to change the UID and another for the container's bootstrap process, which would run in a separate shell after the UID of deploy has been changed. How can I make two entrypoints run one after another? E.g. something like:
ENTRYPOINT ["/fix-uid.sh && /entrypoint.sh"]
It looks like you're designing a solution very similar to one that I've created. As ErikMD mentions, do not use gosu to switch from a user to root; you want to go the other way, from root to a user. Otherwise, you will have an open security hole inside your container where any user can become root, defeating the purpose of running the container as a different user ID.
For the solution that I put together, I have it work whether the container is run in production as just a user with no volume mounts, or in development with volume mounts by initially starting the container as root. You can have an identical Dockerfile, and change the entrypoint to have something along the lines of:
#!/bin/sh
if [ "$(id -u)" = "0" ]; then
fix-perms -r -u deploy -g deploy /var/www/${PROJECT_NAME}
exec gosu deploy "$#"
else
exec "$#"
fi
The fix-perms script above is from my base image, and includes the following bit of code:
# update the uid
if [ -n "$opt_u" ]; then
OLD_UID=`getent passwd "${opt_u}" | cut -f3 -d:`
NEW_UID=`ls -nd "$1" | awk '{print $3}'`
if [ "$OLD_UID" != "$NEW_UID" ]; then
echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
usermod -u "$NEW_UID" -o "$opt_u"
if [ -n "$opt_r" ]; then
find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
fi
fi
fi
(Note, I really like your use of stat -c and will likely be updating my fix-perms script to leverage that over the ls command I have in there now.)
The important part to this is running the container. When you need the fix-perms code to run (which for me is only in development), I start the container as root. This can be a docker run -u root:root ... or user: "root:root" in a compose file. That launches the container as root initially, which triggers the first half of the if/else in the entrypoint that runs fix-perms and then runs a gosu deploy to drop from root to deploy before calling "$@" which is your command (CMD). The end result is pid 1 in the container is now running your command as the deploy user.
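For example (image name, project name, and mount path are placeholders):
# development: start as root so the first branch runs fix-perms, then gosu drops to 'deploy'
docker run -u root:root -e PROJECT_NAME=myapp -v "$PWD:/var/www/myapp" my-php-image
# production: no volume mount and no root; the 'else' branch just exec's the CMD as 'deploy'
docker run -e PROJECT_NAME=myapp my-php-image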
As an aside, if you really want an easier way to run multiple entrypoint fragments in a way that's easy to extend with child images, I use an entrypoint.d folder that is processed by an entrypoint script in my base image. The code to implement that logic is as simple as:
for ep in /etc/entrypoint.d/*.sh; do
if [ -x "${ep}" ]; then
echo "Running: ${ep}"
"${ep}"
fi
done
All of this can be seen, along with an example using nginx, at: https://github.com/sudo-bmitch/docker-base
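As a sketch, a child image extends that by dropping in its own fragment (image and script names are hypothetical; the loop above only runs executable files):
FROM my-base-image
COPY 10-fix-uid.sh /etc/entrypoint.d/10-fix-uid.sh
RUN chmod 0755 /etc/entrypoint.d/10-fix-uid.sh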
The behavior you observe seems fairly normal: in your entrypoint script, you changed the UID associated with the username deploy, but the two whoami commands are still run with the same user (identified by the UID in the first place, not the username).
For more information about UIDs and GIDs in a Docker context, see e.g. that reference.
Note also that using gosu to re-become root is not a standard practice (see in particular that warning in the upstream doc).
For your use case, I'd suggest removing the USER deploy command and switching users at the very end, by adapting your entrypoint script as follows:
Dockerfile
(…)
RUN addgroup -g 1000 deploy \
&& adduser -D -u 1000 -G deploy -s /bin/sh deploy
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm7","-F"]
entrypoint.sh
#!/bin/sh
whoami # it outputs `root`
# Change UID of 'deploy' as per host user UID
HOST_CURRENT_USER_ID=$(stat -c "%u" /var/www/${PROJECT_NAME})
if [ ${HOST_CURRENT_USER_ID} -ne 0 ]; then
usermod -u ${HOST_CURRENT_USER_ID} deploy
groupmod -g ${HOST_CURRENT_USER_ID} deploy
fi
# don't forget the "exec" builtin
exec gosu ${HOST_CURRENT_USER_ID}:${HOST_CURRENT_USER_ID} "$@"
this can be tested using id, for example:
$ docker build -t test-gosu .
$ docker run --rm -it test-gosu /bin/sh
$ id

How to replace a text in conf file in docker image

I am trying to build a Docker image where I need to get the list of directories under a parent directory, separated by commas, and set that value in a configuration file copied into the container, but the text is never replaced in the conf file. Below is the Dockerfile (also available at the GitHub link).
FROM ubuntu:16.04
LABEL maintainer="TEST"
RUN apt-get update && apt-get install vim git -y
COPY odoo.conf /etc/odoo/odoo.cfg
RUN git clone https://github.com/kelseyhightower/helloworld.git /mnt/extra-addons/hellow-world1
RUN git clone https://github.com/kelseyhightower/helloworld.git /mnt/extra-addons/hellow-world2
RUN git clone https://github.com/kelseyhightower/helloworld.git /mnt/extra-addons/hellow-world3
RUN git clone https://github.com/kelseyhightower/helloworld.git /mnt/extra-addons/hellow-world4
COPY setup.sh /setup.sh
RUN ["chmod", "+x", "/setup.sh"]
CMD ["/setup.sh"]
The search-and-replace happens in setup.sh, but entering the container shell never shows the replacement. However, if I execute /setup.sh in the container shell myself, it does the job.
I'm interested to know how to do this and what I am doing wrong.
setup.sh
# get addons path
addons_path=`ls -d /mnt/extra-addons/* | paste -d, -s`
# can't use / because directory name contains, using #
sed -i -e "s#__addons__path__#${addons_path}#" /etc/odoo/odoo.cfg
/etc/odoo/odoo.conf
[options]
addons_path = __addons__path__
data_dir = /var/lib/odoo
.......
Expected
/etc/odoo/odoo.conf
[options]
addons_path = /mnt/extra-addons/hellow-world1,/mnt/extra-addons/hellow-world2,/mnt/extra-addons/hellow-world3,/mnt/extra-addons/hellow-world4
data_dir = /var/lib/odoo
Update:
I removed the intermediate setup.sh and am doing the whole thing in the Dockerfile, which now looks like this:
FROM ubuntu:16.04
LABEL maintainer="TEST"
RUN apt-get update && apt-get install vim git -y
COPY odoo.conf /etc/odoo/odoo.cfg
RUN git clone https://github.com/kelseyhightower/helloworld.git /mnt/extra-addons/hellow-world1
RUN git clone https://github.com/kelseyhightower/helloworld.git /mnt/extra-addons/hellow-world2
RUN git clone https://github.com/kelseyhightower/helloworld.git /mnt/extra-addons/hellow-world3
RUN git clone https://github.com/kelseyhightower/helloworld.git /mnt/extra-addons/hellow-world4
ENV addons_path=$(ls -d /mnt/extra-addons/* | paste -d, -s) ## Fails here: this comes out blank, so the sed command runs but addons_path doesn't have the value. Am I defining the variable wrongly?
RUN sed -i -e "s#__addons__path__#$addons_path#" /etc/odoo/odoo.cfg
Try this:
addons_path=$(find /mnt/extra-addons/ -type d -maxdepth 1 | tr '\n' ',')
sed -i -e "s#__addons__path__#${addons_path}#" /etc/odoo/odoo.cfg
This will not work if the file names contain # or newlines.
paste joins two streams into one. You have just one stream. Use tr for example to substitute newline for another character.
Don't parse ls output.
Syntax using ` ` is deprecated, use $( ... ).
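For example, a glob-based variant that avoids parsing ls entirely (a sketch; it assumes at least one add-on directory exists and that no name contains a comma or #):
addons_path=
for d in /mnt/extra-addons/*/; do
  # strip the trailing slash and append, comma-separated
  addons_path="${addons_path:+$addons_path,}${d%/}"
done
sed -i -e "s#__addons__path__#${addons_path}#" /etc/odoo/odoo.cfg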
I think the trick was how the .sh file gets executed.
Not working:
CMD ["/setup.sh"]
Working:
RUN /bin/bash -c "/setup.sh"
Final Result
FROM ubuntu:16.04
LABEL maintainer="TEST"
RUN apt-get update && apt-get install vim git -y
COPY odoo.conf /etc/odoo/odoo.cfg
RUN git clone https://github.com/kelseyhightower/helloworld.git /mnt/extra-addons/hellow-world1
RUN git clone https://github.com/kelseyhightower/helloworld.git /mnt/extra-addons/hellow-world2
RUN git clone https://github.com/kelseyhightower/helloworld.git /mnt/extra-addons/hellow-world3
RUN git clone https://github.com/kelseyhightower/helloworld.git /mnt/extra-addons/hellow-world4
#ENV addons_path=$(ls -d /mnt/extra-addons/* | paste -d, -s)
#RUN sed -i -e "s#__addons__path__#NEW_PATH#" /etc/odoo/odoo.cfg
COPY setup.sh /setup.sh
RUN ["chmod", "+x", "/setup.sh"]
RUN /bin/bash -c "/setup.sh"
Containers are a wrapper around running a process (that wrapper being namespaces and cgroups). The process being run is defined by the ENTRYPOINT and CMD lines of a Dockerfile. You can override the image's default process to run when you run the container, and for the value of CMD, overriding involves passing a different command after the image name.
So when your Dockerfile ends with:
COPY setup.sh /setup.sh
RUN ["chmod", "+x", "/setup.sh"]
CMD ["/setup.sh"]
You have defined the default value for CMD in your image. But when you run:
docker build -t docker-test .; docker run -it docker-test bash
The ./setup.sh CMD value is replaced by bash. This means setup.sh is never run.
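You can see the difference with the two commands themselves:
docker run -it docker-test           # no command given, so the default CMD /setup.sh runs
docker run -it docker-test bash      # 'bash' replaces the CMD, so /setup.sh never runs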
You can solve this several ways.
1. You can run your setup.sh as part of the image build. If the script has no dependencies on how the container is being run (e.g. external volumes, environment variables passed in, etc.), then this is the better choice.
2. Move your script to an entrypoint and have it finish by running the command provided. When you define both an entrypoint and a cmd, a container is only going to run a single process, so the behavior of Docker is to pass the cmd as arguments to the entrypoint. To run the cmd, you need to do that as part of your entrypoint script.
Option #1 looks like the solution you have done, and the answer I'd recommend:
COPY setup.sh /setup.sh
RUN ["chmod", "+x", "/setup.sh"]
RUN ["/setup.sh"]
CMD bash
You'll want to include the shell at the top of the script so Linux knows how to run it:
#!/bin/sh
# The #!/bin/sh above is important, you can also replace that with the path to bash
# get addons path
addons_path=`ls -d /mnt/extra-addons/* | paste -d, -s`
# can't use / because directory name contains, using #
sed -i -e "s#__addons__path__#${addons_path}#" /etc/odoo/odoo.cfg
Option #2 is useful if /mnt/extra-addons/ changes every time you run the container. This looks like:
COPY setup.sh /setup.sh
RUN ["chmod", "+x", "/setup.sh"]
ENTRYPOINT ["/setup.sh"]
CMD ["bash"]
With an additional line added to the setup script:
#!/bin/sh
# get addons path
addons_path=`ls -d /mnt/extra-addons/* | paste -d, -s`
# can't use / because directory name contains, using #
sed -i -e "s#__addons__path__#${addons_path}#" /etc/odoo/odoo.cfg
# this next line runs the passed arguments as pid 1, replacing this script
# this is how you run an entrypoint and fall through to a cmd
exec "$#"
