I'm using Bitbucket Pipelines to deploy my Laravel application. When I push to my repo it starts to build, and everything works perfectly until the docker exec command, which sends an inline command to execute inside the PHP container. There I get the error
bash: line 3: docker: command not found
which is very weird, because when I run the command directly on the same server, in the same directory, it works perfectly. Docker is installed on the server, and as you can see inside execute.sh, docker-compose works with no issues; however, when running through the pipeline I get the error. Note the pwd call, which confirms the command is executed in the right directory.
bitbucket-pipelines.yml
image: php:7.3

pipelines:
  branches:
    testing:
      - step:
          name: Deploy to Testing
          deployment: Testing
          services:
            - docker
          caches:
            - composer
          script:
            - apt-get update && apt-get install -y unzip openssh-client
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer require phpunit/phpunit
            - vendor/bin/phpunit laravel/tests/Unit
            - ssh do.server.net 'bash -s' < execute.sh
execute.sh looks like this:
cd /home/docker/docker-laravel
docker-compose build && docker-compose up -d
pwd
docker exec -ti php sh -c "php helpershell.php"
exit
And the output from the Bitbucket Pipelines build result looks like this:
Successfully built 1218483bd067
Successfully tagged docker-laravel_php:latest
Building nginx
Step 1/1 : FROM nginx:latest
---> 4733136e5c3c
Successfully built 4733136e5c3c
Successfully tagged docker-laravel_nginx:latest
Creating php ...
Creating mysql ...
Creating mysql ... done
Creating php ... done
Creating nginx ...
Creating nginx ... done
/home/docker/docker-laravel
bash: line 3: docker: command not found
I think part of the reason this is happening is that docker-compose and docker are two separate commands; just because one works does not mean they both do. You might also want to check the indentation of your bitbucket-pipelines.yml file, because YAML can be pretty finicky.
See here for sample structure: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html
Are you defining docker as a service in the Bitbucket pipeline, according to the documentation, with a top-level definitions entry? Like so:
definitions:
  services:
    docker:
      memory: 512  # reduce memory for docker-in-docker from 1GB to 512MB
Alternatively, if docker is included and ready to use directly in the image the pipeline is running, then you might try removing the services key from your step, as that could be conflicting with the docker installed on the image (and since you haven't instantiated the docker service via the top-level definitions entry posted above, the pipeline may end up in a state where it thinks docker isn't set up).
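For illustration, here is a minimal sketch of how the top-level definitions entry and the step-level services key are meant to fit together, reusing the names from your file (the memory value is just the example above):

definitions:
  services:
    docker:
      memory: 512

pipelines:
  branches:
    testing:
      - step:
          name: Deploy to Testing
          deployment: Testing
          services:
            - docker
          script:
            - ssh do.server.net 'bash -s' < execute.sh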
Related
After upgrading Spring Boot to 2.4, we cannot run the final Docker image that we create via this script:
script:
  - echo $CI_JOB_TOKEN | docker login -u gitlab-ci-token --password-stdin $CI_REGISTRY
  - apk add openjdk11
  - ./gradlew bootBuildImage --imageName=$DOCKER_IMAGE_INTERMEDIATE
  - docker build -f ./docker/Dockerfile --build-arg base_image=$DOCKER_IMAGE_INTERMEDIATE -t $DOCKER_IMAGE_TAGGED .
  - docker push $DOCKER_IMAGE_TAGGED
  - docker tag $DOCKER_IMAGE_TAGGED $DOCKER_IMAGE_LATEST
  - docker push $DOCKER_IMAGE_LATEST
our Dockerfile just creates a folder and chowns it to the CNB user:
# The base_image holds a reference to the image created by ./gradlew bootBuildImage
ARG base_image
FROM ${base_image}
ENV SOME_PATH /var/lib/some/files
USER root
RUN mkdir -p ${SOME_PATH}
RUN chown ${CNB_USER_ID}:${CNB_GROUP_ID} ${SOME_PATH}
USER ${CNB_USER_ID}:${CNB_GROUP_ID}
ENTRYPOINT /cnb/lifecycle/launcher
While this worked fine in Spring Boot 2.3, we now get this error when trying to run the image after upgrading to Spring Boot 2.4:
ERROR: failed to launch: determine start command: when there is no default process a command is required
Edit:
The CI log output shows this line at the end of the bootBuildImage command:
[creator] Setting default process type 'web'
Edit2:
By further inspecting the differences between the images created by bootBuildImage with Spring Boot 2.3 and 2.4, I found a hint that the default ENTRYPOINT is no longer /cnb/lifecycle/launcher but /cnb/process/web.
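(As an aside, a quick way to compare entrypoints like this is docker inspect; the image name below is just a placeholder:)

docker inspect --format '{{json .Config.Entrypoint}}' my-spring-app:latest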
Updating the last line of our Dockerfile to select this entrypoint:
ENTRYPOINT /cnb/process/web
enables us to start the image! yay! :)
However, I leave the question open, because I still wonder why the default process is no longer used by the lifecycle launcher.
I am writing a bash file with scripts to install Spinnaker in a Kubernetes cluster (minikube). Everything is working fine and Spinnaker is now installed, but when I enter the Halyard container and want to run a few scripts from my bash file, it gets inside the Halyard container yet does not execute the next commands, because I don't know how to run multiple commands inside it. I tried \ and && as well, but they are not working.
These are my commands
kubectl exec --namespace spinnaker -it spinnaker-spinnaker-halyard-0 bash
hal config features edit --artifacts true
hal config artifact github enable
GITHUB_ACCOUNT_NAME=github_user
hal config artifact github account add ${GITHUB_ACCOUNT_NAME} \
--token
hal deploy apply
If I try kubectl exec --namespace spinnaker -it spinnaker-spinnaker-halyard-0 bash \ then it runs the next command (hal config features edit --artifacts true), but it shows the error "--unknown flag --artifacts".
NOTE: If I run these command manually in the CLI then everything works fine but I want to run these commands from my bash file.
I'm assuming the commands that you want to run are not stored in a file in the container. If you add these commands to a script file (e.g. config-halyard.sh) and mount a persistent volume containing this script to the Halyard container, you should be able to execute it from outside the container with this command:
kubectl exec --namespace spinnaker -it spinnaker-spinnaker-halyard-0 -- /bin/bash config-halyard.sh
That is assuming that the script would be in the container's root directory
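For illustration, config-halyard.sh could simply collect the commands from the question (the GitHub account name is the placeholder from the question, and the token is supplied when prompted):

#!/bin/bash
# config-halyard.sh -- executed inside the Halyard container
hal config features edit --artifacts true
hal config artifact github enable
GITHUB_ACCOUNT_NAME=github_user
hal config artifact github account add ${GITHUB_ACCOUNT_NAME} --token
hal deploy apply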
I apologize if this is a silly question, but when I type the following inside PowerShell on Windows 10:
docker run -it -v C:\Users\Bob\Documents\test:/usr/python -w /usr/python bob/python
It works just fine and I receive the following prompt:
root@63eef6ac2b96:/usr/python#
To avoid repeating the command over and over, I built a Makefile with the following rule:
docker:
	docker run -it -v C:\Users\Bob\Documents\test:/usr/python -w /usr/python bob/python
When I try to execute
make docker
I receive the following error
PS C:\Users\Bob\documents\test> make docker
docker run -it -v C:\Users\Bob\Documents\test:/usr/python -w /usr/python bob/python
c:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: the working directory 'C:/MinGW/msys/1.0/python' is invalid, it needs to be an absolute path.
See 'c:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
make.exe": *** [docker] Error 125
Any suggestion is greatly appreciated.
You do not have to use a Makefile; Docker Compose is what you are looking for.
In brief, you need to create a docker-compose.yml file and describe all the desired steps inside it. I am not aware of your full setup, but I will try to provide a skeleton for your docker-compose file.
version: '3.7'  # depends on your Docker engine version
services:
  python_service:  # add a name of your choice
    build: build/  # the path of the image's Dockerfile
    volumes:
      - C:\Users\Bob\Documents\test:/usr/python
    working_dir: /usr/python
In the snippet above:
-v flag replaced with the volumes section
-w flag replaced with the working_dir section
How to use:
Now that your docker-compose file is ready, you do not need to remember or repeat the docker run command; simply execute docker-compose up in the directory where your compose file is located, and you will have your container up and running.
Note that this is a simple example of how to use docker-compose. It is a powerful tool that allows you to start containers from multiple images, create networks and much more. I would recommend reading the official documentation for additional information.
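If you also want the interactive shell that your original docker run -it command gave you, docker-compose run should work for a one-off container (the service name is taken from the skeleton above, and this assumes the image provides bash):

docker-compose run python_service /bin/bash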
I have already searched related questions, like here:
How do I get initial admin password for jenkins on Mac?
and here:
How to recover Jenkins password
However, I cannot find a solution for my problem.
I am following the instructions to install Jenkins from this link:
https://jenkins.io/doc/book/installing/
and I have run the following commands to install it and make it run on my local machine (macOS):
docker run \
-u root \
--rm \
-d \
-p 8080:8080 \
-p 50000:50000 \
-v jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
jenkinsci/blueocean
It installs properly, but when I get to the login screen it asks for the initial admin password. Because the installation runs detached (-d) in the background, I cannot see the initial password after the installation completes. When I remove -d, the installation does not work.
I also checked the shared folder (User/Shared/Jenkins/Home) directory and there was no secrets folder in it. So I created one manually and followed the instructions (in the answers) on this link again;
How do I get initial admin password for jenkins on Mac?
Afterwards, I removed the related docker process and restarted all the installation process from the beginning but I got the same result.
In this case, how can I find this initial admin password or how can I generate it again?
BTW: I also checked the logs (in /var/log/jenkins), but it seems that Jenkins stopped writing there after my first install attempt, and I couldn't find the initial password there either.
docker exec <container_name> cat /var/jenkins_home/secrets/initialAdminPassword
I tried looking into the container's filesystem, but there's no secrets folder in it. However, I found the solution in the Jenkins documentation:
Docker outputs the initial secret to the console
To view the console use the command
docker logs <container id of jenkins>
The output is something like this (password replaced with a placeholder):
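*************************************************************

Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:

<password>

This may also be found at: /var/jenkins_home/secrets/initialAdminPassword

*************************************************************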
If you are using the Docker installation of Jenkins on a Mac, follow the steps below to get the initial administrator password for the Jenkins console. Type the following commands in a terminal.
(Note: this works if you have followed the default steps in the Jenkins documentation to install Jenkins in a Docker environment.)
Find the running containers:
docker ps
Copy the running container ID and open a shell in the container:
docker exec -it <containerID> bash
cd /var/jenkins_home/secrets
cat initialAdminPassword
Use the secret password shown in the terminal as the initial password for the Jenkins console.
If you have installed Jenkins via Docker, the following command can give you the initial admin password, assuming your container name/Docker image name is jenkins:
docker exec `docker ps | grep jenkins | awk '{ print $1}' ` cat /var/jenkins_home/secrets/initialAdminPassword
docker exec $(docker ps -q) cat /var/jenkins_home/secrets/initialAdminPassword
For me the username was: admin
and you can find the password with:
docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword
(my container name is jenkins)
Can you install docker-compose and docker toolbox on your Mac?
https://docs.docker.com/compose/install/
Try to execute this docker-compose.yml file:
version: '3.1'
services:
  blue-ocean:
    image: jenkinsci/blueocean:latest
    container_name: blue-ocean
    restart: always
    environment:
      TZ: America/Mexico_City
    ports:
      - 8080:8080
      - 50000:50000
    tty: true
    volumes:
      - ./jenkins-data:/var/jenkins_home
      - ./sock:/var/run/docker.sock
You only need to create a folder with this docker-compose.yml file inside and execute docker-compose up -d in a terminal. The folders jenkins-data and sock will then be created, and inside jenkins-data the file ./jenkins-data/secrets/initialAdminPassword will appear. Open this file, copy its contents, and paste it into the input on the web view that requires it.
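Alternatively, because jenkins-data is bind-mounted from the host, you should be able to read the password without entering the container at all (run from the folder containing the compose file):

cat ./jenkins-data/secrets/initialAdminPassword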
I wish to use the GitLab Container Registry to temporarily store my newly built Docker image. In order to have Docker available (i.e. docker login, docker build, docker push), I applied the docker-in-docker executor; then, from the GitLab Pipelines error messages, I realized I need to place a Dockerfile at the project root:
$ docker build --pull -t $CONTAINER_TEST_IMAGE .
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /builds/xxxxx.com/laravel/Dockerfile: no such file or directory
My Dockerfile includes centos:7, php, nodejs, composer and sass installations. I observe that after each commit, the GitLab runner goes through the Dockerfile once and installs all of them from the beginning, which makes the whole build stage very slow and rather absurd: why should amending one word in my project require installing so many things again for deployment?
Ideally, I could pre-build a Docker image from a Dockerfile that contains the installations mentioned above plus Docker itself (so that docker login, docker build and docker push work), store it on the GitLab-runner server, and after each commit reuse this image to build the new image to be pushed to the GitLab Container Registry.
However, I faced two problems:
1) Even if I include the Docker installation in the pre-built Docker image, I cannot start Docker with systemctl start docker, due to a D-Bus problem:
Failed to get D-Bus connection: Operation not permitted
Moreover, some articles mention that Docker inside a container should not run background services.
2) When I use dind, it requires a Dockerfile at the project root; with the pre-built Docker image, I actually have nothing to do with this Dockerfile at the project root. Is dind therefore the wrong option?
Actually, what is the proper way to push a Laravel project image to the GitLab Container Registry? (Where should those npm install and composer install commands go?)
image: docker:latest
services:
  - docker:dind
stages:
  - build
  - test
  - deploy
variables:
  CONTAINER_TEST_IMAGE: xxxx
  CONTAINER_RELEASE_IMAGE: yyyy
before_script:
  - docker login -u xxx -p yyy registry.gitlab.com
build:
  stage: build
  script:
    - npm install here?
    - docker build --pull -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE
There are many questions in your post; I will address them as follows:
You can pre-build a Docker image and then use it in your .gitlab-ci.yml file. This can be used to add your specific dependencies.
image: my-custom-image
services:
  - docker:dind
It is important to add the service to the configuration.
Regarding your problem of trying to run the Docker service inside the GitLab CI job: you actually don't need to do that. GitLab exposes the Docker engine to the executor (either via unix:///var/run/docker.sock or tcp://localhost:2375/). Note that if the runners are executed in a Kubernetes environment, you need to specify DOCKER_HOST as follows:
variables:
  DOCKER_HOST: tcp://localhost:2375/
Your question about where to place npm install is more fundamentally a question about how Docker images are built. In short, npm install should be placed in the Dockerfile, so it runs at image build time and its layer can be cached; see the sketch below, and the references at the end for a longer explanation.
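As a rough sketch (the base image, paths and flags are assumptions, not a prescription for your project), a Dockerfile that installs Composer dependencies at build time could look like this. Copying the dependency manifests before the rest of the source lets Docker cache the install layers between commits; npm install can be handled the same way if Node is available in the image:

# illustrative sketch only -- adjust base image and paths to your project
FROM php:7.3-cli
WORKDIR /app

# install composer once, inside the image
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

# copy dependency manifests first so these layers stay cached
# unless composer.json/composer.lock actually change
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --no-autoloader

# copy the application source; a one-word change only invalidates layers from here down
COPY . .
RUN composer dump-autoload --optimize

With this layout, amending one word in your source only re-runs the final COPY and dump-autoload steps, not the full dependency installation.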
Some references:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-in-docker-executor
https://nodejs.org/en/docs/guides/nodejs-docker-webapp/