The reason I'm asking is that we're using AWS CodeBuild and I need to run DB migrations. If a DB migration breaks, I want to cancel the CodeBuild run and roll back the migration that was just made. I've got this part working; all I need to do now is cancel the Docker build midway. How can I do this?
This is my .sh file with the knex migration commands:
#!/bin/bash
echo "running"
function mytest {
    "$@"
    local status=$?
    if [ $status -ne 0 ]; then
        knex migrate:rollback
        echo "Rolling back knex migrate $1" >&2
        exit
    fi
    return $status
}
mytest knex migrate:latest
Running exit will not cancel/break the docker build.
My Dockerfile (just incase):
FROM node:6.2.0
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
RUN chmod +x /usr/src/app/migrate.sh
RUN /usr/src/app/migrate.sh
EXPOSE 8080
CMD npm run build && npm start
Running exit will not cancel/break the docker build.
Running exit 1 should. Docker responds to the exit codes returned by the RUN shell scripts in said Dockerfile: a non-zero status fails that step and aborts the build, while a plain exit reuses the status of the last command executed (here the echo, which succeeds).
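To make this concrete, here is a sketch of a wrapper in the same spirit as the asker's mytest function (the wrapped commands are stand-ins, not the actual knex calls), showing why the explicit exit 1 matters:

```shell
#!/bin/bash
# Sketch of the asker's pattern with an explicit non-zero exit.
run_or_fail() {
    "$@"                      # run the wrapped command
    local status=$?
    if [ $status -ne 0 ]; then
        echo "Rolling back: $1 failed with status $status" >&2
        exit 1                # a plain 'exit' would reuse the status of the
                              # last command (the echo above), i.e. exit 0
    fi
    return $status
}
```

A Dockerfile RUN step executing such a script fails, and aborts the build, exactly when the script's overall exit status is non-zero.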
Project
I am building a docker-compose file for a simple DevOps stack. One of the tools is Helix Core by Perforce. I am trying to build an Ubuntu Dockerfile that will install Helix Core and then run it. I have already written a bash script install.sh that, when used like this
FROM ubuntu:20.04
COPY ./install.sh /install.sh
ENTRYPOINT ["/bin/bash", "/install.sh"]
will work flawlessly.
Breaking Change
The problem is that I need the script to run as a setup step and not every time the container is started. So I tried the following:
FROM ubuntu:20.04
COPY ./install.sh /install.sh
RUN chmod +x /install.sh
SHELL ["/bin/bash", "-c"]
RUN /install.sh
ENTRYPOINT [ "p4d" ]
Problem
Firstly, I do not get any descriptive output in the console; the only thing I get is the default build output.
...
=> CACHED [2/4] COPY ./install.sh /install.sh 0.0s
=> CACHED [3/4] RUN chmod +x /install.sh 0.0s
=> CACHED [4/4] RUN /install.sh 0.0s
=> exporting to image
...
Secondly, the script does not seem to execute, or it fails immediately (it should take much longer than it does). Here is the script; it works in the first Dockerfile, just not in the second.
#!/bin/bash
service_name="${service_name:="master"}"
p4root="${p4root:="/opt/perforce/servers/$service_name"}"
unicode_mode="${unicode_mode:=0}"
case_sensitive="${case_sensitive:=0}"
p4port="${p4port:="1666"}"
super_user_login="${super_user_login:="super"}"
if [ -z "$super_user_password" ]
then
echo "Install aborted!"
echo "Please set 'super_user_password' via environment variable!"
exit
fi
echo "Installing Helix Core..."
echo "Updating Ubuntu..."
apt-get update -y
echo "Installing utilities..."
apt-get install ca-certificates wget gpg curl -y
echo "Downloading public key..."
curl https://package.perforce.com/perforce.pubkey > perforce.pubkey
echo "Adding public key..."
gpg init
gpg -n --import --import-options import-show perforce.pubkey
rm perforce.pubkey
echo "Adding perforce packaging key to keyring..."
wget -qO - https://package.perforce.com/perforce.pubkey | apt-key add -
echo "Adding perforce repository to APT configuration..."
echo "deb http://package.perforce.com/apt/ubuntu focal release" > /etc/apt/sources.list.d/perforce.list
echo "Updating Ubuntu..."
apt-get update -y
echo "Installing..."
apt-get install helix-p4d -y
echo "Install complete! Writing config file..."
/opt/perforce/sbin/configure-helix-p4d.sh "$service_name" -n -p "$p4port" -r "$p4root" -u "$super_user_login" -P "$super_user_password"
p4 admin stop
Extra Information
The Dockerfile is being built with docker-compose, but I have already tried out docker build with no success.
My Thoughts
It is my understanding that the only relevant difference between RUN and ENTRYPOINT is that RUN executes only once in the lifecycle of a container (during the build phase), while ENTRYPOINT defines the executable that is started together with the container every time it starts. So I assumed the environment the script is called in would be the same.
Any ideas to why this behavior occurs and how to fix it are appreciated.
So, thanks to @DavidMaze, I took another look at something I had overlooked before.
You can get output from the build command using --progress (see: Why is docker build not showing any output from commands?)
Problem 1 solved!
From there I found:
#6 [4/4] RUN /install.sh
#6 sha256:26be9fa6818fd9e3a0fb64e95a83b816de790902b1f54da1c123ff19888873ff
#6 0.344 Install aborted!
#6 0.344 Please set 'super_user_password' via environment variable!
#6 DONE 0.4s
Problem 2 identified!
Now... the actual problem is that:
Environment variables and arguments are two different things.
You want environment variables for the execution environment and arguments for the build environment. This webpage explained it to me.
My mistake was trying to use environment variables for the build environment.
Modified Dockerfile (note that the exec form of RUN does not expand variables, so the shell form is used, and the build-time value needs an ARG declaration):
FROM ubuntu:20.04
COPY ./install.sh /install.sh
SHELL ["/bin/bash", "-c"]
RUN chmod +x /install.sh
ARG super_user_password
RUN /install.sh "$super_user_password"
ENTRYPOINT [ "p4d" ]
and I had to add this line to my script:
super_user_password="$1"
And finally... it works!
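The argument-vs-environment distinction can be sketched in plain shell, independent of Docker (function and variable names here are illustrative, modeled on the answer's super_user_password):

```shell
#!/bin/bash
# Accept the secret either as the first positional argument (how the
# modified Dockerfile passes it) or as an environment variable (how the
# original install.sh expected it).
get_password() {
    local password="${1:-$super_user_password}"
    if [ -z "$password" ]; then
        echo "Please set 'super_user_password'!" >&2
        return 1
    fi
    echo "$password"
}
```

During docker build, only build arguments declared with ARG (and ENV values set in the Dockerfile) are visible to RUN steps; environment variables supplied at run time with docker run -e do not exist yet at that point.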
I'm using Laravel 8.x with Sail on PHP 8.0. Recently I messed up my composer.json file, resulting in issues with the vendor directory, and while trying to recreate the project from scratch I deleted the vendor folder.
Normally, docker-compose would build and create the /path/to/project/vendor/laravel/sail/runtimes/ directory with its appropriate content, but for some reason I keep getting the following error:
ERROR: build path /path/to/project/vendor/laravel/sail/runtimes/8.0 either does not exist, is not accessible, or is not a valid URL.
I tried using docker system prune and deleting the existing containers manually through the Docker Desktop interface, and I even tried running it with docker-compose build --no-cache; I still get the same error.
Is there a way to fix this or should I just clone my project again and try to build it?
Note: I'm using an old Mac without the possibility of just manually running composer install so any of my interactions with the instance relies on the docker container working.
docker run --rm --interactive --tty --volume C:/path/to/project:/app composer install --ignore-platform-reqs --no-scripts
The standard procedure for setting up any Laravel project should be running composer install, so an inability to do so really ties one's hands here.
However, in this case, where the only way for me to run composer was through docker, I elected to use the laravel.build website to create a new project and copy the vendor folder over. Here's the script:
docker info > /dev/null 2>&1
# Ensure that Docker is running...
if [ $? -ne 0 ]; then
echo "Docker is not running."
exit 1
fi
docker run --rm \
-v $(pwd):/opt \
-w /opt \
laravelsail/php80-composer:latest \
bash -c "laravel new example-app && cd example-app && php ./artisan sail:install --with=mysql,redis,meilisearch,mailhog,selenium"
cd example-app
CYAN='\033[0;36m'
LIGHT_CYAN='\033[1;36m'
WHITE='\033[1;37m'
NC='\033[0m'
echo ""
if sudo -n true 2>/dev/null; then
sudo chown -R $USER: .
echo -e "${WHITE}Get started with:${NC} cd example-app && ./vendor/bin/sail up"
else
echo -e "${WHITE}Please provide your password so we can make some final adjustments to your application's permissions.${NC}"
echo ""
sudo chown -R $USER: .
echo ""
echo -e "${WHITE}Thank you! We hope you build something incredible. Dive in with:${NC} cd example-app && ./vendor/bin/sail up"
fi
After that, running ./vendor/bin/sail up -d && ./vendor/bin/sail composer install fixed the problem.
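The recovery itself is just a copy of the freshly generated vendor tree into the broken project before rebuilding; a minimal sketch (function name and paths are placeholders, not from the original post):

```shell
#!/bin/bash
# Copy a freshly generated vendor/ tree from a scratch project into the
# broken one, so ./vendor/bin/sail exists again before rebuilding.
restore_vendor() {
    local fresh="$1" broken="$2"
    if [ ! -d "$fresh/vendor" ]; then
        echo "no vendor tree in $fresh" >&2
        return 1
    fi
    cp -R "$fresh/vendor" "$broken/"
}
```

After the copy, the sail up / composer install sequence above brings the dependency tree back in sync with the project's own composer.json.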
I'm launching a bash script inside a container to do npm install, but I can't see the progress bar.
here's the docker command inside a bash script:
docker-compose run --rm my-docker-node-service \
bash < npm-install-empty-node_modules
Here's the code for npm-install-empty-node_modules
if [ ! -d "node_modules" ]; then
echo "node_modules folder doesn't exist"
npm install
exit 0
fi
It prints "node_modules folder doesn't exist" as well as everything else, just not the npm install progress bar, which makes it seem like it has frozen. But if you wait long enough, it finishes installing.
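The guard logic itself can be exercised outside Docker; here is a self-contained sketch of the same check, with npm install stubbed by an echo so the control flow is visible:

```shell
#!/bin/bash
# Run the (stubbed) install only when the directory is missing.
install_if_missing() {
    local dir="${1:-node_modules}"
    if [ ! -d "$dir" ]; then
        echo "$dir folder doesn't exist"
        echo "installing"        # stand-in for: npm install
    else
        echo "$dir present, skipping install"
    fi
}
```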
I have set up a remote git repository which triggers a build of my application on the pre-receive hook.
My application is made with Play Framework and built using sbt universal:packageZipTarball.
I use supervisord to manage processes such as the built application's runner, nginx, ...
I've tried running the build command in the pre-receive script and directly as a supervisord process, but I can't get any output from it and it seems stuck, as my application is not built...
my supervisord conf file :
sbt-build.conf
[program:sbtbuild]
priority=100
directory=/app/data/build
command=bash sbt-build.sh
user=root
autostart=false
autorestart=false
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=2048
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=2048
the build script
sbt-build.sh
#!/bin/bash
set -eu
BUILD_PATH="/app/data/build"
WEBSITE_PATH="/app/data/website"
echo "=> Build application..."
chmod -R +rwx $BUILD_PATH
cd $BUILD_PATH
sbt universal:packageZipTarball # The script is stuck here...
echo "=> Unpack build..."
cd $BUILD_PATH/target/universal
tar -zxvf *.tgz --strip 1 -C $WEBSITE_PATH
chmod +x $WEBSITE_PATH
cd $WEBSITE_PATH/bin
rm *.bat
mv * start
echo "=> Run application"
supervisorctl restart sbt
The git hook is working as expected and is set up to trigger supervisorctl start sbtbuild.
The last output I get is Build application... from the echo command in sbt-build.sh.
Also, when I run the sbt-build.sh script manually, it works as expected.
I finally found the solution here, with some explanation.
Adding -Djline.terminal=jline.UnsupportedTerminal to the sbt command does the trick.
My start command now looks like this: sbt -Djline.terminal=jline.UnsupportedTerminal universal:packageZipTarball
I have to run Maven and Angular builds inside a docker container with an sh command, as below:
docker run -v, a maven command, and then ng build.
How can I run/concatenate both commands in a single docker run command?
The solution to this is:
sh """
docker run -v ....\\
"maven command" \\
&& cd directory && ng build
"""
That worked for me.
In order to run multiple commands in docker, use /bin/bash -c with a semicolon ;
With a semicolon, the second command ng build would be executed even if the first command (cd) returned an error or non-zero exit status. To run it only when the first command succeeds, use && instead of ; (semicolon):
docker run image /bin/bash -c "cd directory && ng build"
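The ; versus && behavior is easy to verify in any shell, no Docker required:

```shell
#!/bin/bash
# With ';' the second command always runs; with '&&' it runs only if
# the first command succeeded.
semicolon_chain() { cd /nonexistent 2>/dev/null ; echo "ran anyway"; }
and_chain()       { cd /nonexistent 2>/dev/null && echo "not reached"; }
```

semicolon_chain prints "ran anyway" even though the cd failed, while and_chain prints nothing because && short-circuits on the failure.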