- app
  - microservices
    - micro_a
      - Dockerfile
      - source
        - worker.py
    - micro_b
      - Dockerfile
      - source
        - worker.py
    - utilities
      - module.py
I have a GKE project with a number of microservices (micro_a and micro_b in the schema above).
Each microservice is defined in its own directory which includes both the Dockerfile and the microservice-specific code (Dockerfile and source/worker.py).
The microservices also share some common utilities (module.py).
I do not want to have replicas of module.py in all microservices folders.
For this reason, in the Dockerfile I would like to include the following lines:
COPY micro_a/source/. ./
COPY utilities/. ./
and call gcloud builds submit at the level of the "microservices" folder in the schema above.
If I use docker build, the above method works perfectly.
If I use gcloud builds submit, it doesn't.
How can I include the content of the parent folder utilities in the Dockerfile COPY command when the build is submitted with the gcloud CLI?
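One commonly used pattern (a sketch only; the config file name and image tag below are assumptions, not taken from your project) is to keep the build context at the "microservices" folder and point Docker at the microservice's Dockerfile from a cloudbuild.yaml, since by default gcloud builds submit only looks for a Dockerfile at the root of the uploaded context:
# cloudbuild.yaml placed in the "microservices" folder
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/micro_a', '-f', 'micro_a/Dockerfile', '.']
images:
- 'gcr.io/$PROJECT_ID/micro_a'
With something like this, running gcloud builds submit --config cloudbuild.yaml . from the "microservices" folder uploads the whole folder, so both COPY micro_a/source/. and COPY utilities/. resolve the same way they do with a local docker build.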
Related
As part of the build process, I need to download some Python packages from my private GCS bucket and install them as part of the Dockerfile. This could be done using a multi-stage build like this one:
FROM google/cloud-sdk:alpine as gcloud
WORKDIR /packages
RUN gsutil cp gs://... /packages
FROM python:3.7.0
COPY --from=gcloud /packages .
but I have a big problem with injecting the service account from my local machine so that gsutil will be able to perform the copy. I tried capturing the content of the service account JSON via ARG, but it does not work with Cloud Build substitutions due to its JSON format:
gcloud builds submit . --substitutions _SERVICE_ACCOUNT_CONTENT="$(cat path_to_service_account.json)"
I also thought about shared volumes between steps, but that does not work with docker build. Any ideas how I can do this with Cloud Build?
My recommendation is to do this outside your Docker build, and to leverage Cloud Build's built-in authentication mechanism (through the metadata server) to download the files.
In your Cloud Build steps:
steps:
- name: 'gcr.io/cloud-builders/gsutil'
  args: ["cp", "gs://...", "./packages"]
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/<PROJECT_ID>/......', '.']
In your Dockerfile
FROM python:3.7.0
COPY packages/* /packages/
.....
Simply grant the Cloud Build service account access to your bucket.
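For example (a sketch; the bucket name is a placeholder and PROJECT_NUMBER is your project number), the default Cloud Build service account can be given read access to the bucket with:
gsutil iam ch \
  serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com:roles/storage.objectViewer \
  gs://your-packages-bucket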
pipelines:
  default:
    - step:
        name: Push changes to Commerce Cloud
        script:
          - dcu --putAll $OCCS_CODE_LOCATION --node $OCCS_ADMIN_URL --applicationKey $OCCS_APPLICATION_KEY
    - step:
        name: Publish changes Live Storefront
        image: Python 3.5.1
        script:
          - python publishDCUAuthoredChanges.py -u $OCCS_ADMIN_URL -k $OCCS_APPLICATION_KEY
environment variables:
$OCCS_CODE_LOCATION: Path to location of all OCCS code
$OCCS_ADMIN_URL: URL for the administration interface on the target Commerce Cloud instance
$OCCS_APPLICATION_KEY: application key to use to log into the target Commerce Cloud administration interface
So I want to use an Azure DevOps repository for CI/CD.
In the code block above, you can see that I have specified the dcu and Python code in two tasks.
dcu is a third-party Node.js Oracle tool that needs to be used to migrate code to the cloud system. I want to know how to use that tool in Azure DevOps.
Second, the Python (or Node.js) script, which I want to use to invoke a REST API to publish the changes.
So where do I place those files, and how do I invoke them?
*********** Update **************
I set up the self-hosted agent pool and I'm able to access the system.
I just started executing basic bash code, but ended up with two issues:
1) When git extracts files from the repository, they go to _work/1/s; I'm not sure how that path is decided. How can I change that location?
2) I cd to the correct path (verified with 'pwd'), but it fails on the 'dcu' command. I tried npm and a few other commands and they fail too, but things like mkdir and rmdir create and remove folders correctly in the desired path. When I try the 'dcu' command manually from a terminal on the system, it works fine as expected.
You can follow the steps below to use the DCU tool and Python in Azure Pipelines.
1. Create an Azure git repo that includes the DCU zip file and your .py files. You can follow the steps in this thread to create an Azure git repo and push local files to the Azure repo.
2. Create an Azure build pipeline. Please check here to create a YAML pipeline; here is a good tutorial to get you started.
To create a classic UI pipeline, please choose Use the classic editor in the pipeline setup wizard, and choose Empty job to start with an empty pipeline and add your own steps. (I will use a classic UI pipeline in the example below.)
3. Click "+" and search for the Extract files task to unzip the DCU zip file. Click the three dots on the Destination folder field to select a destination folder for the extracted dcu files, e.g. $(agent.builddirectory). Please check my answer in this thread for more information about predefined variables.
4. Click "+" to add a PowerShell task. Run the script below (shown in the screenshot) to install dcu and run the dcu command. For environment variables (like $OCCS_CODE_LOCATION), please click the Variables tab in the screenshot below to define them:
cd $(agent.builddirectory)   # the folder where the unzipped dcu files reside, e.g. $(agent.builddirectory)
npm install -g               # install the extracted dcu package globally
.\dcu.cmd --putAll $(OCCS_CODE_LOCATION) --node $(OCCS_ADMIN_URL) --applicationKey $(OCCS_APPLICATION_KEY)
5. Add a Use Python version task to define the Python version used to execute your .py file.
6. Add a Python script task to run your .py file. Click the three dots on the Script path field to locate your publishDCUAuthoredChanges.py file (this .py file and the DCU zip file were pushed to the Azure git repo in step 1 above).
You should now be able to run the script from the question above in the Azure DevOps pipeline.
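If you later prefer a YAML pipeline over the classic UI, a rough equivalent of the steps above could look like the sketch below (the zip pattern and .py path are assumptions about your repo layout, not verified against it):
steps:
- task: ExtractFiles@1
  inputs:
    archiveFilePatterns: 'dcu*.zip'          # assumed name of the DCU zip in the repo
    destinationFolder: '$(Agent.BuildDirectory)'
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      cd $(Agent.BuildDirectory)
      npm install -g
      .\dcu.cmd --putAll $(OCCS_CODE_LOCATION) --node $(OCCS_ADMIN_URL) --applicationKey $(OCCS_APPLICATION_KEY)
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.x'
- task: PythonScript@0
  inputs:
    scriptSource: 'filePath'
    scriptPath: 'publishDCUAuthoredChanges.py'   # assumed location in the repo root
    arguments: '-u $(OCCS_ADMIN_URL) -k $(OCCS_APPLICATION_KEY)'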
Update:
_work/1/s is the default working folder for the agent. You cannot change it. Though there are ways to change the location where the source code is cloned from git, the tasks' working directory still defaults to that folder.
However, you can change the working directory inside the tasks. And there are predefined variables you can use to refer to locations on the agent. For example:
$(Agent.BuildDirectory) is mapped to c:\agent\_work\1
$(Build.ArtifactStagingDirectory) is mapped to c:\agent\_work\1\a
$(Build.BinariesDirectory) is mapped to c:\agent\_work\1\b
$(Build.SourcesDirectory) is mapped to c:\agent\_work\1\s
The .sh scripts in the _temp folder are generated automatically by the agent; they contain the scripts from the bash tasks.
For the above "dcu command not found" error: you can try adding the dcu command's path to the system PATH in your local machine's environment variables. (A path set in the user variables cannot be found by agent jobs, because the agent uses a different user account to connect to the local machine.)
Or you can use the physical path to the dcu command in the bash task. For example, let's say dcu.cmd is at c:\dcu\dcu.cmd on the local machine. Then in the bash task, use the script below to run the dcu command:
c:/dcu/dcu.cmd --putAll ...
I needed to create a Docker image of a Spring Boot application, and I achieved that by creating a Dockerfile and building it into an image. Then, I used "docker run" to bring up a container. This container is used for all the activities for which my application was written.
My problem, however, is that the JAR file that I have used needs constant changes, and that requires me to rebuild the Docker image every time. Furthermore, I need to take the contents of the earlier running Docker container and transfer them into a container created from the newly built image.
I know this whole process can be written as a shell script and executed every time I have changes to my JAR file. But is there any tool I can use to somehow automate it in a simple manner?
Here is my Dockerfile:
FROM java:8
WORKDIR /app
ADD ./SuperApi ./SuperApi
ADD ./config ./config
ADD ./Resources ./Resources
EXPOSE 8000
CMD java -jar SuperApi/SomeName.jar --spring.config.location=SuperApi/application.properties
If you have a JAR file that you need to copy into an otherwise static Docker image, you can use a bind mount to save needing to rebuild repeatedly. This allows for directories to be shared from the host into the container.
Say your project directory (the build location where the JAR file is located) on the host machine is /home/vishwas/projects/my_project, and you need to have the contents placed at /opt/my_project inside the container. When starting the container from the command line, use the -v flag:
docker run -v /home/vishwas/projects/my_project:/opt/my_project [...]
Changes made to files under /home/vishwas/projects/my_project locally will be visible immediately inside the container [1], so no need to rebuild (and probably no need to restart) the container.
If using docker-compose, this can be expressed using a volumes stanza under the services listing for that container:
volumes:
  - type: bind
    source: /home/vishwas/projects/my_project
    target: /opt/my_project
This works for development, but later on, it's likely you'll want to bundle the JAR file into the image instead of sharing it from the host system (so it can be placed into production). When that time comes, just re-build the image and add a COPY directive to the Dockerfile. Note that the COPY source path is relative to the build context rather than an absolute host path, so assuming the image is built from /home/vishwas/projects/my_project, it would be:
COPY . /opt/my_project
[1]: Worth noting that it will default to read/write, so the container will also be able to modify your project files. To mount as read-only, use: docker run -v /home/vishwas/projects/my_project:/opt/my_project:ro
You are looking for Docker Compose.
You can build and start containers with a single command using Compose.
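For example, a minimal compose file for the Dockerfile in the question might look like the sketch below (the service name, port mapping, and bind-mount path are assumptions); docker compose up --build then rebuilds the image when needed and recreates the container in a single command:
# docker-compose.yml -- a minimal sketch, assuming it sits next to the Dockerfile above
version: "3.8"
services:
  superapi:
    build: .                        # build the image from the Dockerfile in this folder
    ports:
      - "8000:8000"                 # matches the EXPOSE 8000 in the Dockerfile
    volumes:
      - ./SuperApi:/app/SuperApi    # optional bind mount so a rebuilt JAR is picked up without an image rebuild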
I have a monolithic GitHub project that has multiple different applications that I'd like to integrate with an AWS CodeBuild CI/CD workflow. My issue is that if I make a change to one project, I don't want the other to be rebuilt and redeployed. Essentially, I want to create a logical fork that deploys differently based on the files changed in a particular commit.
Basically my project repository looks like this:
- API
-node_modules
-package.json
-dist
-src
- REACTAPP
-node_modules
-package.json
-dist
-src
- scripts
- 01_install.sh
- 02_prebuild.sh
- 03_build.sh
- .ebextensions
In terms of deployment, my API project gets deployed to Elastic Beanstalk and my REACTAPP gets deployed as static files to S3. I've tried a few things but decided that the only viable approach is to manually perform this deploy step within my own 03_build.sh script, because there's no way to build this dynamically within CodeBuild's deploy step (I could be wrong).
Anyway, my issue is that I essentially need to create a decision tree to determine which project gets executed, so that if I make a change to API and push, it doesn't automatically deploy REACTAPP to S3 unnecessarily (and vice versa).
I managed to get this working on localhost by updating environment variables at certain points in the build process and then reading them in separate steps. However, this fails on CodeBuild because of permission issues, i.e. I don't seem to be able to update env variables from within the CI process itself.
Explicitly, my buildconf.yml looks like this:
version: 0.2
env:
  variables:
    VARIABLES: 'here'
    AWS_ACCESS_KEY_ID: 'XXXX'
    AWS_SECRET_ACCESS_KEY: 'XXXX'
    AWS_REGION: 'eu-west-1'
    AWS_BUCKET: 'mybucket'
phases:
  install:
    commands:
      - sh ./scripts/01_install.sh
  pre_build:
    commands:
      - sh ./scripts/02_prebuild.sh
  build:
    commands:
      - sh ./scripts/03_build.sh
I'm running my own shell scripts to perform some logic and I'm trying to pass variables between scripts: install->prebuild->build
To give one example, here's the 01_install.sh where I diff each project version to determine whether it needs to be updated (excuse any minor errors in bash):
#!/bin/bash
# STAGE 1
# _______________________________________
# API PROJECT INSTALL
# Do if API version was changed in prepush (this is just a sample and I'll likely end up storing the version & previous version within the package.json):
if ! diff ./api/version.json ./api/old_version.json > /dev/null 2>&1; then
  echo "🤖 Installing dependencies in API folder..."
  cd ./api/ && npm install
  # Set a variable to be used by the 02_prebuild.sh script
  TEST_API="true"
  export TEST_API
else
  echo "No change to API"
fi
# ______________________________________
# REACTAPP PROJECT INSTALL
# Do if REACTAPP version number has changed (similar to above):
...
Then in my next stage, 02_prebuild.sh, I read these variables to determine whether I should run tests on the project:
#!/bin/bash
# STAGE 2
# _________________________________
# API PROJECT PRE-BUILD
# Do if install was initiated
if [[ "$TEST_API" == "true" ]]; then
  echo "🤖 Run tests on API project..."
  cd ./api/ && npm run tests
  echo $TEST_API
  BUILD_API="true"
  export BUILD_API
else
  echo "Don't test API"
fi
# ________________________________
# TODO: Complete for REACTAPP, similar to above
...
In my final script I use the BUILD_API variable to build to the dist folder, then I deploy that to either Elastic Beanstalk (for API) or S3 (for REACTAPP).
When I run this locally it works; however, when I run it on CodeBuild I get a permissions failure, presumably because my bash scripts cannot export ENV_VAR. I'm wondering if anyone knows how to update ENV_VARIABLES from within the build process itself, or if anyone has a better approach to achieve my goals (a conditional/variable build process on CodeBuild).
EDIT:
So, an approach that I've managed to get working: instead of using env variables, I'm creating new files with specific names using fs, then reading the contents of those files to make logical decisions. I can access these files from each of the bash scripts, so it works pretty elegantly with some automatic cleanup.
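A minimal sketch of that file-based approach (the flag file name is arbitrary; this uses plain touch/test rather than fs, but the idea is the same):
# in 01_install.sh -- record the decision in a flag file instead of exporting a variable
if ! diff ./api/version.json ./api/old_version.json > /dev/null 2>&1; then
  touch .build_api
fi
# in 02_prebuild.sh -- read the flag file written by the previous script
if [ -f .build_api ]; then
  (cd ./api/ && npm run tests)
fi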
I won't edit the original question as it's still an issue, and I'd like to know how/if other people solved this. I'm still playing around with how to actually use the eb deploy and s3 CLI commands within the build scripts, as CodeBuild does not seem to come with the EB CLI installed and my .ebextensions file does not seem to be honoured.
Source control repos like GitHub can be configured to send a POST event to an API endpoint when you push to a branch. You can consume this POST request in Lambda through API Gateway. The event data includes which files were modified in the commit. The Lambda function can then process this event to figure out what to deploy. If you're struggling with deploying to your servers from the CodeBuild container, you might want to try posting an artifact to S3 as an installable package and then have your server grab it from there.
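A rough Python sketch of that idea (every name here is illustrative, not a drop-in implementation; the commits[].added/modified/removed fields come from GitHub's push webhook payload):
import json

def lambda_handler(event, context):
    # API Gateway (Lambda proxy integration) delivers the GitHub push payload as a JSON string
    payload = json.loads(event["body"])

    changed = set()
    for commit in payload.get("commits", []):
        changed.update(commit.get("added", []))
        changed.update(commit.get("modified", []))
        changed.update(commit.get("removed", []))

    # Decide which sub-project was touched (paths are relative to the repo root)
    deploy_api = any(path.startswith("API/") for path in changed)
    deploy_reactapp = any(path.startswith("REACTAPP/") for path in changed)

    # ...trigger the matching build/deploy here, e.g. start a per-app CodeBuild project via boto3...

    return {
        "statusCode": 200,
        "body": json.dumps({"deploy_api": deploy_api, "deploy_reactapp": deploy_reactapp}),
    }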
Windows 10. In a folder I have just:
app (directory with many files)
Dockerfile (the simplest possible Dockerfile)
I run "docker build ." and it just hangs.
If I remove the "app" directory, the build runs OK.
The Dockerfile contains just one line:
FROM node
I didn't find any existing issues like this. It feels like it tries to scan the directory or something.
Any advice?
UPD: It seems that I should use .dockerignore https://docs.docker.com/engine/reference/builder/#/dockerignore-file
When you run docker build ..., the Docker client sends the context (the recursive contents of the directory) via REST to the Docker daemon for building. If that context is large, this can take some time (depending on a variety of factors: whether your daemon is local or remote, the platform, etc.).
How long are you giving it to hang before giving up? It could be that it's still just working. Or the context may have been so large that the client or daemon ran into an issue. Checking the (client / daemon) logs would help debug that.
And yes, a .dockerignore file (basically a .gitignore, but for the Docker context) is probably what you're looking for, unless you need the contents of the app directory during your build.
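For example, a .dockerignore next to the Dockerfile might look like this (the entries are guesses at what typically bloats a Node.js build context; adjust to what your build actually needs):
# .dockerignore -- keep large or irrelevant paths out of the context sent to the daemon
.git
app/node_modules
**/*.log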
Your Dockerfile should be put in a directory that includes only its build context. For example, if you are building a Spring Boot app, you can put the Dockerfile right under /app, as shown in this official Docker sample.
Docker's documentation:
In most cases, it’s best to start with an empty directory as context and keep your Dockerfile in that directory. Add only the files needed for building the Dockerfile.
Warning: Do not use your root directory, /, as the PATH as it causes the build to transfer the entire contents of your hard drive to the Docker daemon.
I've seen that simple Docker examples put the Dockerfile in the root directory, but for complicated examples like the one I posted above, the Dockerfile is put only in its relevant directory. You can dig through the dockersamples repository and find your case.