I'm launching a bash script inside a container to run npm install, but I can't see the progress bar.
Here's the docker command inside a bash script:
docker-compose run --rm my-docker-node-service \
bash < npm-install-empty-node_modules
Here's the code for npm-install-empty-node_modules:
if [ ! -d "node_modules" ]; then
  echo "node_modules folder doesn't exist"
  npm install
  exit 0
fi
It prints "node_modules folder doesn't exist" as well as everything else, just not the npm install progress bar, which makes it seem like it has frozen. But if you wait long enough, it finishes installing.
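The bar disappears because npm draws it only when it detects an interactive terminal. Redirecting the script into bash with bash < npm-install-empty-node_modules takes over the container's stdin, so docker-compose run can't allocate a TTY and npm silently drops the progress display. A quick sketch of the same detection (the [ -t 0 ] test checks whether stdin is a terminal):

```shell
# With stdin redirected from a file or /dev/null (as with `bash < script`),
# the terminal test fails, which is exactly when npm hides its progress bar
bash -c 'if [ -t 0 ]; then echo "stdin is a TTY"; else echo "stdin is not a TTY"; fi' < /dev/null
# → prints "stdin is not a TTY"
```

One workaround (assuming the script is available inside the container, e.g. via a bind mount or COPY) is to pass it as an argument instead of on stdin, so the TTY stays attached: docker-compose run --rm my-docker-node-service bash /npm-install-empty-node_modules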
Project
I am building a docker-compose file for a simple devops stack. One of the tools is Helix Core by Perforce. I am trying to build an Ubuntu Dockerfile that will install Helix Core and then run it. I have already written a bash script install.sh that, when used like this,
FROM ubuntu:20.04
COPY ./install.sh /install.sh
ENTRYPOINT ["/bin/bash", "/install.sh"]
will work flawlessly.
Breaking Change
The problem is that I need the script to run as a setup step and not every time the container is started. So I tried the following:
FROM ubuntu:20.04
COPY ./install.sh /install.sh
RUN chmod +x /install.sh
SHELL ["/bin/bash", "-c"]
RUN /install.sh
ENTRYPOINT [ "p4d" ]
Problem
Firstly, I do not get any descriptive output in the console; the only thing I see is the default build output.
...
=> CACHED [2/4] COPY ./install.sh /install.sh 0.0s
=> CACHED [3/4] RUN chmod +x /install.sh 0.0s
=> CACHED [4/4] RUN /install.sh 0.0s
=> exporting to image
...
Secondly, the script does not seem to execute, or it fails immediately (it should take much longer than it does). Here is the script; it does work in the first Dockerfile, just not in the second.
#!/bin/bash
service_name="${service_name:="master"}"
p4root="${p4root:="/opt/perforce/servers/$service_name"}"
unicode_mode="${unicode_mode:=0}"
case_sensitive="${case_sensitive:=0}"
p4port="${p4port:="1666"}"
super_user_login="${super_user_login:="super"}"
if [ -z "$super_user_password" ]
then
echo "Install aborted!"
echo "Please set 'super_user_password' via environment variable!"
exit
fi
echo "Installing Helix Core..."
echo "Updating Ubuntu..."
apt-get update -y
echo "Installing utilities..."
apt-get install ca-certificates wget gpg curl -y
echo "Downloading public key..."
curl https://package.perforce.com/perforce.pubkey > perforce.pubkey
echo "Adding public key..."
gpg init
gpg -n --import --import-options import-show perforce.pubkey
rm perforce.pubkey
echo "Adding perforce packaging key to keyring..."
wget -qO - https://package.perforce.com/perforce.pubkey | apt-key add -
echo "Adding perforce repository to APT configuration..."
echo "deb http://package.perforce.com/apt/ubuntu focal release" > /etc/apt/sources.list.d/perforce.list
echo "Updating Ubuntu..."
apt-get update -y
echo "Installing..."
apt-get install helix-p4d -y
echo "Install complete! Writing config file..."
/opt/perforce/sbin/configure-helix-p4d.sh "$service_name" -n -p "$p4port" -r "$p4root" -u "$super_user_login" -P "$super_user_password"
p4 admin stop
Extra Information
The Dockerfile is being built with docker-compose, but I have already tried out docker build with no success.
My Thoughts
It is my understanding that the only relevant difference between RUN and ENTRYPOINT is that RUN executes only once in the lifecycle of a container (during the build phase), while ENTRYPOINT defines the executable that is started together with the container every time it starts. So I assumed the environment the script is called in would be the same.
Any ideas as to why this behavior occurs, and how to fix it, are appreciated.
So, thanks to @DavidMaze, I took another look at something I had overlooked before.
You can get output from the build command using --progress (see: Why is docker build not showing any output from commands?)
Problem 1 solved!
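For reference, the BuildKit flag from that answer looks like this (a config fragment, shown here as a sketch):

```shell
# plain mode prints each RUN's output verbatim instead of the collapsed display
docker build --progress=plain .
```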
From there I found:
#6 [4/4] RUN /install.sh
#6 sha256:26be9fa6818fd9e3a0fb64e95a83b816de790902b1f54da1c123ff19888873ff
#6 0.344 Install aborted!
#6 0.344 Please set 'super_user_password' via environment variable!
#6 DONE 0.4s
Problem 2 identified!
Now... the actual problem is this:
Environment variables and arguments are two different things.
You want environment variables (ENV) for the execution environment and build arguments (ARG) for the build environment. This webpage explained it to me.
My mistake was trying to use environment variables for the build environment.
Modified Dockerfile:
FROM ubuntu:20.04
COPY ./install.sh /install.sh
SHELL ["/bin/bash", "-c"]
RUN chmod +x /install.sh
ARG super_user_password
RUN /install.sh "$super_user_password"
ENTRYPOINT [ "p4d" ]
The password is supplied at build time with docker build --build-arg super_user_password=<password>.
and I had to add this line to my script:
super_user_password="$1"
And finally... it works!
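One detail worth flagging: in a Dockerfile, the exec (JSON-array) form of RUN performs no variable substitution, so a build argument only reaches the script through the shell form after an ARG declaration. Plain bash shows the same quoting distinction (a sketch; password stands in for the build argument):

```shell
password="s3cret"             # stands in for ARG super_user_password
echo "shell form: $password"  # double quotes: the shell expands the variable
echo 'exec form:  $password'  # single quotes: the literal text is passed through,
                              # as in exec-form RUN, where no shell ever runs
```

Running it prints `shell form: s3cret` followed by the literal `exec form:  $password`.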
I have a set of directories, 10 at the moment, named client-1, client-2, ..., client-10, and one directory named nestjs-wrapper.
I want to iterate over the client directories, enter each of them, and fire npm install and node index.js in every one.
I could do it by hand, but the number of clients may grow in the future, so I would like to automate this process.
So the flow would be something like this:
in the parent directory I would like to fire nvm use to make sure I have the desired node version
then cd into each directory, fire npm install & node index.js
cd back to parent directory
repeat this until packages are installed in every client directory
run docker-compose up in a detached terminal
cd from parent directory into a nestjs-wrapper and start it in watch mode with npm run start:dev
This is the start of the attempt; it installs the packages in the client directories. Now I would somehow need to do the rest of the flow:
pattern="/home/dario/my-folder/client"
for _dir in "${pattern}-"[[:digit:]]*; do
  [ -d "$_dir" ] || continue;
  pushd "$_dir" && npm install;
done
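The glob in that loop matches only the numbered client directories, so nestjs-wrapper is skipped automatically. A self-contained sketch with a throwaway tree (echo stands in for npm install):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/client-1" "$tmp/client-2" "$tmp/client-10" "$tmp/nestjs-wrapper"
cd "$tmp"

# client-[[:digit:]]* matches client-1 ... client-10 but not nestjs-wrapper
for _dir in client-[[:digit:]]*; do
  [ -d "$_dir" ] || continue
  echo "installing in $_dir"   # npm install would go here
done
```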
I would like to start docker-compose from the parent directory in a detached terminal.
To do this, I just created a new script named start-docker.sh in which I only have docker-compose up.
And after that, enter the separate directory in the parent (the one not named client-*) and run npm run start:dev in it.
So it would go something like:
pattern="client"
for _dir in "${pattern}-"[[:digit:]]*; do
  [ -d "$_dir" ] || continue;
  pushd "$_dir" && npm install && node index.js;
  popd;
done
gnome-terminal -- ./start-docker.sh;
pushd nestjs_wrapper && npm run start:dev;
This does the trick. I switched back to relative pathnames: first I iterate over all the client directories and install the packages, then I bring up docker-compose and start the wrapper in watch mode.
I'm using Laravel 8.x with Sail on PHP 8.0. Recently I messed up my composer.json file, resulting in issues with the vendor folder; trying to recreate the project from scratch, I deleted the vendor folder.
Normally, docker-compose would build and create the /path/to/project/vendor/laravel/sail/runtimes/ directory with its appropriate content, but for some reason, I keep getting the following error:
ERROR: build path /path/to/project/vendor/laravel/sail/runtimes/8.0 either does not exist, is not accessible, or is not a valid URL.
I tried using docker system prune and deleting the existing containers manually through the Docker Desktop interface, and I even tried running docker-compose build --no-cache, but I still get the same error.
Is there a way to fix this or should I just clone my project again and try to build it?
Note: I'm using an old Mac without the possibility of just manually running composer install, so all of my interactions with the instance rely on the docker container working.
docker run --rm --interactive --tty --volume C:/path/to/project:/app composer install --ignore-platform-reqs --no-scripts
The standard procedure for setting up any Laravel project should be running composer install, so an inability to do so really ties one's hands here.
However, in this case, where the only way for me to run composer was through docker, I elected to use the laravel.build website to create a new project and copy the vendor folder over. Here's the script:
# Ensure that Docker is running...
docker info > /dev/null 2>&1
if [ $? -ne 0 ]; then
    echo "Docker is not running."
    exit 1
fi
docker run --rm \
-v $(pwd):/opt \
-w /opt \
laravelsail/php80-composer:latest \
bash -c "laravel new example-app && cd example-app && php ./artisan sail:install --with=mysql,redis,meilisearch,mailhog,selenium"
cd example-app
CYAN='\033[0;36m'
LIGHT_CYAN='\033[1;36m'
WHITE='\033[1;37m'
NC='\033[0m'
echo ""
if sudo -n true 2>/dev/null; then
    sudo chown -R $USER: .
    echo -e "${WHITE}Get started with:${NC} cd example-app && ./vendor/bin/sail up"
else
    echo -e "${WHITE}Please provide your password so we can make some final adjustments to your application's permissions.${NC}"
    echo ""
    sudo chown -R $USER: .
    echo ""
    echo -e "${WHITE}Thank you! We hope you build something incredible. Dive in with:${NC} cd example-app && ./vendor/bin/sail up"
fi
After that, running ./vendor/bin/sail up -d && ./vendor/bin/sail composer install fixed the problem.
I am creating a bash script which runs through each of my projects and runs npm run test if the test script exists.
I know that if I get into a project and run npm run it will give me the list of available scripts as follows:
Lifecycle scripts included in www:
  start
    node server.js
  test
    mocha --require @babel/register --require dotenv/config --watch-extensions js **/*.test.js

available via `npm run-script`:
  dev
    node -r dotenv/config server.js
  dev:watch
    nodemon -r dotenv/config server.js
  build
    next build
However, I have no idea how to grab that information, see if test is available and then run it.
Here is my current code:
#!/bin/bash
ROOT_PATH="$(cd "$(dirname "$0")" && pwd)"
BASE_PATH="${ROOT_PATH}/../.."
while read MYAPP; do # reads from a list of projects
  PROJECT="${MYAPP}"
  FOLDER="${BASE_PATH}/${PROJECT}"
  cd "$FOLDER"
  if [ check here if the command exists ]; then
    npm run test
    echo ""
  fi
done < "${ROOT_PATH}/../assets/apps-manifest"
EDIT:
As mentioned by Marie and James, if you only want to run the command when it exists, npm has an option for that:
npm run test --if-present
This way you can have a generic script that works with multiple projects (that may or may not have a specific task) without the risk of receiving an error.
Source: https://docs.npmjs.com/cli/run-script
EDIT
You could do a grep to check for the word test:
npm run | grep -q test
This returns true if the output of npm run contains the word test.
In your script it would look like this:
#!/bin/bash
ROOT_PATH="$(cd "$(dirname "$0")" && pwd)"
BASE_PATH="${ROOT_PATH}/../.."
while read MYAPP; do # reads from a list of projects
  PROJECT="${MYAPP}"
  FOLDER="${BASE_PATH}/${PROJECT}"
  cd "$FOLDER"
  if npm run | grep -q test; then
    npm run test
    echo ""
  fi
done < "${ROOT_PATH}/../assets/apps-manifest"
It would only be a problem if the word test appears there with another meaning.
Hope it helps
The right solution is using the if-present flag:
npm run test --if-present
--if-present doesn't allow you to "check if an npm script exists"; it runs the script if it exists. If you have fallback logic, this won't suffice. In my case, I want to run npm run test:ci if it exists, and if not, check for and run npm run test. Using --if-present would run both the test:ci AND test scripts if both exist. By checking if one exists first, we can decide which to run.
Because I have both "test" and "test:ci" scripts, the npm run | grep approach wasn't sufficient. As much as I'd like to do this with strictly npm, I have jq in my environments so I decided to go that route to have precision.
To check for a script named "test:ci":
if [[ $(jq '.scripts["test:ci"]' < package.json) != null ]]; then
  # script exists
fi
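Putting that together, the fallback described above (prefer test:ci, else test) can be sketched like this; jq is assumed to be installed, and the sample package.json, the has_script helper name, and the echoed commands are illustrative stand-ins:

```shell
cd "$(mktemp -d)"
# Illustrative manifest with only a test:ci script
printf '{"scripts":{"test:ci":"echo ci"}}\n' > package.json

# Succeeds when package.json's scripts map contains the given name
has_script() {
  [[ $(jq --arg name "$1" '.scripts[$name]' package.json) != null ]]
}

if has_script "test:ci"; then
  echo "would run: npm run test:ci"
elif has_script "test"; then
  echo "would run: npm run test"
fi
```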
The reason I'm asking is that we're using AWS CodeBuild and I need to do DB migrations. If a DB migration breaks, I want to cancel the CodeBuild run and roll back the migration that was just made. I've got this part working; all I need to do now is cancel the docker build midway. How can I do this?
This is my .sh file with the knex migration commands:
#!/bin/bash
echo "running"
function mytest {
  "$@"
  local status=$?
  if [ $status -ne 0 ]; then
    knex migrate:rollback
    echo "Rolling back knex migrate $1" >&2
    exit
  fi
  return $status
}
mytest knex migrate:latest
Running exit will not cancel/break the docker build.
My Dockerfile (just in case):
FROM node:6.2.0
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
RUN chmod +x /usr/src/app/migrate.sh
RUN /usr/src/app/migrate.sh
EXPOSE 8080
CMD npm run build && npm start
"Running exit will not cancel/break the docker build."
Running exit 1 should. Docker responds to the exit codes returned by the RUN steps in the Dockerfile, so a non-zero exit from the script fails the build at that step.
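A sketch of the question's wrapper with that change applied; true stands in for the knex command (and the rollback call is reduced to a comment) so the behaviour is visible without a database:

```shell
# Run a command; on failure, roll back and abort with a non-zero status so the
# enclosing `RUN /usr/src/app/migrate.sh` step fails and stops the image build
function mytest {
  "$@"
  local status=$?
  if [ $status -ne 0 ]; then
    # knex migrate:rollback would run here
    echo "Rolling back knex migrate $1" >&2
    exit 1
  fi
  return $status
}

mytest true   # succeeds; the build would continue
```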