paketo-buildpack spring-boot bootBuildImage ca-certificates binding issue

I am trying to attach CA certificates (PEM files) to the Docker image produced by the Spring Boot Gradle plugin (buildpacks). The command I am using is ./gradlew bootBuildImage. This works fine locally and adds the certificates, but when I run it from my GitLab pipeline I get the errors below:
I added some logging to the pipeline, and it seems that even though the files (the PEM files and the type file) are present and have the appropriate permissions, the pipeline runner probably doesn't have access to them, hence the failure. I don't see how I can provide the files differently or execute a command in the builder to fetch them via wget/curl.
Here is my build.gradle configuration:
and the pem files are stored like this:
The error is not very helpful and the documentation is not great. Any idea is welcome.
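For reference, the binding layout the ca-certificates buildpack expects is just a directory containing a type file (with the single line ca-certificates) plus the PEM files; the bootBuildImage task is then pointed at it via its bindings option, something like "${project.rootDir}/bindings/ca-certificates:/platform/bindings/ca-certificates". The names below are generic placeholders, not my exact project paths:
mkdir -p bindings/ca-certificates
echo "ca-certificates" > bindings/ca-certificates/type   # the type file tells the buildpack what kind of binding this is
cp rds-ca.pem bindings/ca-certificates/                   # one or more PEM files to add to the truststore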
I have added
environment = [
"BP_LOG_LEVEL": "debug"
]
and the /platform-related sections of the log (aside from the originally shared log) are here (they are mentioned a few times):
EDIT: The certificates I am trying to add are the AWS RDS ones. I did try changing the buildpacks builder image in an attempt to use a more appropriate (Adoptium) one that already contains the AWS root certificate (https://bugs.openjdk.org/browse/JDK-8233223), but with no luck.
I get the impression that it is a GitLab issue and have started exploring passing the PEM files to the EKS pod differently, maybe via SERVICE_BINDING_ROOT and Kubernetes secrets.
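For anyone going down the same route, a rough sketch of the Kubernetes side (the secret name, file name and mount path are placeholders) is to put a type key and the PEM file into a secret, mount it as a volume, and point SERVICE_BINDING_ROOT at the mount directory:
kubectl create secret generic rds-ca-binding \
  --from-literal=type=ca-certificates \
  --from-file=rds-ca.pem
# Then mount the secret as a volume at /bindings/rds-ca-binding in the pod spec
# and set SERVICE_BINDING_ROOT=/bindings on the container so the runtime picks it up.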

Related

Gitlab CI and Xamarin build fails

I've created a completely new Xamarin Forms app project in Visual Studio for Mac and added it to a GitLab repository. After that I created a .gitlab-ci.yml file for setting up my CI build. But the problem is that I get error messages:
error MSB4019: The imported project "/usr/lib/mono/xbuild/Xamarin/iOS/Xamarin.iOS.CSharp.targets" was not found. Confirm that the expression in the Import declaration "/usr/lib/mono/xbuild/Xamarin/iOS/Xamarin.iOS.CSharp.targets" is correct, and that the file exists on disk.
This error pops up also for Xamarin.Android.Csharp.targets.
My YML file looks like this:
image: mono:latest

stages:
  - build

build:
  stage: build
  before_script:
    - msbuild -version
    - 'echo BUILDING'
    - 'echo NuGet Restore'
    - nuget restore 'XamarinFormsTestApp.sln'
  script:
    - 'echo Cleaning'
    - MONO_IOMAP=case msbuild 'XamarinFormsTestApp.sln' $BUILD_VERBOSITY /t:Clean /p:Configuration=Debug
Some help would be appreciated ;)
You will need a macOS host to build a Xamarin.iOS application, and AFAIK that isn't available in GitLab yet. You can find the discussion here and the private beta here. For now, I would recommend running your own macOS host with a registered GitLab runner on it:
https://docs.gitlab.com/runner/
You can set up the host wherever you want (VM or physical device), install the GitLab runner and the Xamarin environment there, tag it, and use it with GitLab pipelines as with any other shared runner.
From the comments on your question, it looks like Xamarin isn't available in the mono:latest image, but that's OK because you can create your own Docker images to use in GitLab CI. You will need access to a registry, but if you use gitlab.com (as opposed to a self-hosted instance) the registry is enabled for all users. You can find more information on that in the docs: https://docs.gitlab.com/ee/user/packages/container_registry/
If you are using self-hosted, the registry is still available (even for free versions) but it has to be enabled by an admin (docs here: https://docs.gitlab.com/ee/administration/packages/container_registry.html).
Another option is to use Docker's own registry, Docker Hub. It doesn't matter what registry you use, but you'll have to have access to one of them so your runners can pull down your image. This is especially true if you're using shared runners that you (or your admins) don't have direct control over. If you can directly control your runners, another option is to build the docker image on all of your runners that need it.
I'm not familiar with Xamarin, but here's how you can create a new Docker image based on mono:latest:
# ./mono-xamarin/Dockerfile
# Build off of an existing image rather than starting from scratch; everything in the mono:latest image will be available in this image.
FROM mono:latest
# Run whatever you need in order to install Xamarin or anything else you need.
RUN ./install_xamarin.sh
# Just an example of installing an extra package.
RUN apt-get update && apt-get install -y git
Once your Dockerfile is written, you can build it like this:
docker build --file path/to/Dockerfile --tag mono-xamarin:latest .  # note the trailing '.', which is the build context
If you build the image on your runners, you can use it immediately like:
# .gitlab-ci.yml
image: mono-xamarin:latest
stages:
  - build
...
Otherwise you can now push it to whichever registry you want to use.
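For example, to push it to the GitLab Container Registry mentioned above, the flow looks roughly like this (the registry path is a placeholder; use the one shown on your project's Container Registry page):
docker login registry.gitlab.com
docker tag mono-xamarin:latest registry.gitlab.com/<group>/<project>/mono-xamarin:latest
docker push registry.gitlab.com/<group>/<project>/mono-xamarin:latest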

How can I properly configure a gcloud account for my Gradle Docker plugin when using GCR?

Our containers are hosted using Google Container Registry, and I am using id "com.bmuschko.docker-java-application" version "3.0.7" to build and deploy docker containers. However, I run into permission issues whenever I try to pull the base image or push the image to GCR (I am able to get to the latter step by pulling the image and having it available locally).
I'm a little bit confused by how I can properly configure a particular GCloud account to be used whenever issuing any Docker related calls over a wire using the plugin.
As a first attempt, I've tried to create a task that precedes any build or push commands:
task gcloudLogin(type: Exec) {
    executable "gcloud"
    args "auth", "activate-service-account", "--key-file", "$System.env.KEY_FILE"
}
However, this simple wrapper doesn't work as desired. Is there currently a supported way to have this plugin work with GCR?
Got in touch with the maintainers of the gradle docker plugin and we have found this to be a valid solution.
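For anyone hitting the same permission issue, one commonly used approach (not necessarily the exact solution referenced above) is to authenticate the Docker client against GCR with a service-account JSON key; the key file path, project ID and image name below are placeholders:
cat keyfile.json | docker login -u _json_key --password-stdin https://gcr.io
docker push gcr.io/<project-id>/<image>:<tag>
Depending on the plugin version, you may still need to supply the same credentials through the plugin's registry credentials configuration.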

How to get app Incremental deployment to kubernetes

I tried to set up Continuous Deployment using Jenkins for my own microservice project, which is organized as a multi-module Maven project (each submodule representing a microservice). I use "Incremental build - only build changed modules" in Jenkins to avoid unnecessary building, and then use docker-maven-plugin to build the Docker images. However, how can I redeploy only the changed images to the Kubernetes cluster?
You can use local docker image registry.
docker run -d -p 5000:5000 --restart=always --name registry registry:2
You can then push the development images to this registry as a build step and make your kubernetes containers use this registry.
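As a build step, pushing a development image to that local registry looks like this (the image name and tag are examples):
docker tag my-service:dev localhost:5000/my-service:dev
docker push localhost:5000/my-service:dev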
After you are ready, push the image to your production image registry and adjust container manifests to use proper registry.
More info on private registry server: https://docs.docker.com/registry/deploying/
Currently Kubernetes does not provide a proper solution for this, but there are a few workarounds mentioned here: https://github.com/kubernetes/kubernetes/issues/33664
I like this one: 'Fake a change to the Deployment by changing something other than the image'. We can do it this way:
Define an environment variable, say TIMESTAMP, with any value in the deployment manifest. In the CI/CD pipeline, set the value to the current timestamp and then pass the updated manifest to kubectl apply. This way we fake a change, and Kubernetes will pull the latest image and deploy it to the cluster. Please make sure that imagePullPolicy: Always is set.
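As a sketch of that approach (the deployment name is a placeholder), the pipeline can also bump the variable directly instead of editing the manifest, which triggers a new rollout:
kubectl set env deployment/my-service TIMESTAMP="$(date +%s)"  # any change to the pod template forces a redeploy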

How to deploy with Gitlab-Ci to EC2 using AWS CodeDeploy/CodePipeline/S3

I've been working on a SlackBot project based in Scala using Gradle and have been looking into ways to leverage Gitlab-CI for the purpose of deploying to AWS EC2.
I am able to fully build and test my application with Gitlab-CI.
How can I perform a deployment from Gitlab-CI to Amazon EC2 Using CodeDeploy and CodePipeline?
The answer follows as a guide for doing this.
I have created a set of sample files to go with the Guide provided below.
These files are available at the following link: https://gitlab.com/autronix/gitlabci-ec2-deployment-samples-guide/
Scope
This guide assumes the following
Gitlab EE hosted project - may work on private CE/EE instances (not tested)
Gitlab as the GIT versioning repository
Gitlab-CI as the Continuous Integration Engine
Existing AWS account
AWS EC2 as the target production or staging system for the deployment
AWS EC2 Instance running Amazon Linux AMI
AWS S3 as the storage facility for deployment files
AWS CodeDeploy as the Deployment engine for the project
AWS CodePipeline as the Pipeline for deployment
The provided .gitlab-ci.yml sample is based on a Java/Scala + Gradle project.
The script is provided as a generic example and will need to be adapted to your specific needs when implementing Continuous Delivery through this method.
The guide will assume that the user has basic knowledge about AWS services and how to perform the necessary tasks.
Note: The guide provided in this sample uses the AWS console to perform tasks. While there are likely CLI equivalents for the tasks performed here, these will not be covered throughout the guide.
Motivation
The motivation for creating these scripts and deployment guide came from the lack of availability of a proper tutorial showing how to implement Continuous Delivery using Gitlab and AWS EC2.
Gitlab introduced their freely available CI engine by partnering with Digital Ocean, which enables user repositories to benefit from good quality CI for free.
One of the main advantages of using Gitlab is that they provide built-in Continuous Integration containers for running through the various steps and validate a build.
Unfortunately, neither GitLab nor AWS provides an integration that would allow performing Continuous Delivery after a passing build.
This Guide and Scripts (https://gitlab.com/autronix/gitlabci-ec2-deployment-samples-guide/) provide a simplified version of the steps that I've undertaken in order to have a successful CI and CD using both Gitlab and AWS EC2 that can help anyone else get started with this type of implementation.
Setting up the environment on AWS
The first step in ensuring a successful Continuous Delivery process is to set up the necessary objects on AWS in order to allow the deployment process to succeed.
AWS IAM User
The initial requirement will be to set up an IAM user:
https://console.aws.amazon.com/iam/home#users
Create a user
Attach the following permissions:
CodePipelineFullAccess
AmazonEC2FullAccess
AmazonS3FullAccess
AWSCodeDeployFullAccess
Inline Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:*",
        "codedeploy:*",
        "ec2:*",
        "elasticloadbalancing:*",
        "iam:AddRoleToInstanceProfile",
        "iam:CreateInstanceProfile",
        "iam:CreateRole",
        "iam:DeleteInstanceProfile",
        "iam:DeleteRole",
        "iam:DeleteRolePolicy",
        "iam:GetInstanceProfile",
        "iam:GetRole",
        "iam:GetRolePolicy",
        "iam:ListInstanceProfilesForRole",
        "iam:ListRolePolicies",
        "iam:ListRoles",
        "iam:PassRole",
        "iam:PutRolePolicy",
        "iam:RemoveRoleFromInstanceProfile",
        "s3:*"
      ],
      "Resource": "*"
    }
  ]
}
Generate security credentials
Note: The policies listed above are very broad in scope. You may adjust to your requirements by creating custom policies that limit access only to certain resources.
Note: Please keep these credentials in a safe location. You will need them in a later step.
AWS EC2 instance & Role
Instance Role for CodeDeploy
https://console.aws.amazon.com/iam/home#roles
Create a new Role that will be assigned to your EC2 Instance in order to access S3.
Set the name according to your naming conventions (ie. MyDeploymentAppRole)
Select Amazon EC2 in order to allow EC2 instances to run other AWS services
Attach the following policies:
AmazonEC2FullAccess
AmazonS3FullAccess
AWSCodeDeployRole
Note: The policies listed above are very broad in scope. You may adjust to your requirements by creating custom policies that limit access only to certain resources.
Launch Instance
https://console.aws.amazon.com/ec2/v2/home
Click on Launch Instance and follow these steps:
Select Amazon Linux AMI 2016.03.3 (HVM), SSD Volume Type
Select the required instance type (t2.micro by default)
Next
Select IAM Role to be MyDeploymentAppRole (based on the name created in the previous section)
Next
Select Appropriate Storage
Next
Tag your instance with an appropriate name (ie. MyApp-Production-Instance)
add additional tags as required
Next
Configure Security group as necessary
Next
Review and Launch your instance
You will be provided with the possibility to either generate or use SSH keys. Please select the applicable method.
Setting up instance environment
Install CodeDeploy Agent
Log into your newly created EC2 instance and follow the instructions:
http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent-install.html
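For reference, on an Amazon Linux instance the documented install boils down to the following (the region in the S3 URL is an example; use your instance's region as per the linked guide):
sudo yum update -y
sudo yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto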
CodeDeploy important paths:
CodeDeploy Deployment base directory: /opt/codedeploy-agent/deployment-root/
CodeDeploy Log file: /var/log/aws/codedeploy-agent/codedeploy-agent.log
Tip: run tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log to keep track of the deployment in real time.
Install your project prerequisites
If your project has any prerequisites to run, make sure that you install those before running the deployment, otherwise your startup script may fail.
AWS S3 repository
https://console.aws.amazon.com/s3/home
In this step, you will need to create an S3 bucket that will be holding your deployment files.
Simply follow these steps:
Choose Create Bucket
Select a bucket name (ie. my-app-codepipeline-deployment)
Select a region
In the console for your bucket select Properties
Expand the Versioning menu
choose Enable Versioning
AWS CodeDeploy
https://console.aws.amazon.com/codedeploy/home#/applications
Now that the basic elements are set, we are ready to create the Deployment application in CodeDeploy
To create a CodeDeploy deployment application follow these steps:
Select Create New Application
Choose an Application Name (ie. MyApp-Production )
Choose a Deployment Group Name (ie. MyApp-Production-Fleet)
Select the EC2 Instances that will be affected by this deployment - Search by Tags
Under Key Select Name
Under Value Select MyApp-Production-Instance
Under Service Role, Select MyDeploymentAppRole
Click on Create Application
Note: You may assign the deployment to any relevant Tag that applied to the desired instances targeted for deployment. For simplicity's sake, only the Name Tag has been used to choose the instance previously defined.
AWS CodePipeline
https://console.aws.amazon.com/codepipeline/home#/dashboard
The next step is to proceed with creating the CodePipeline, which is in charge of performing the connection between the S3 bucket and the CodeDeploy process.
To create a CodePipeline, follow these steps:
Click on Create Pipeline
Name your pipeline (ie. MyAppDeploymentPipeline )
Next
Set the Source Provider to Amazon S3
set Amazon S3 location to the address of your bucket and target deployment file (ie. s3://my-app-codepipeline-deployment/myapp.zip )
Next
Set Build Provider to None - This is already handled by Gitlab-CI as will be covered later
Next
Set Deployment Provider to AWS CodeDeploy
set Application Name to the name of your CodeDeploy Application (ie. MyApp-Production)
set Deployment Group to the name of your CodeDeploy Deployment Group (ie. MyApp-Production-Fleet )
Next
Create or Choose a Pipeline Service Role
Next
Review and click Create Pipeline
Setting up the environment on Gitlab
Now that the AWS environment has been prepared to receive the application deployment, we can proceed with setting up the CI environment and settings to ensure that the code is built and deployed to an EC2 instance using S3, CodeDeploy and CodePipeline.
Gitlab Variables
In order for the deployment to work, we will need to set a few environment variables in the project repository.
In your Gitlab Project, navigate to the Variables area for your project and set the following variables:
AWS_DEFAULT_REGION => your AWS region
AWS_SECRET_ACCESS_KEY => your AWS user credential secret key (obtained when you generated the credentials for the user)
AWS_ACCESS_KEY_ID => your AWS user credential key ID (obtained when you generated the credentials for the user)
AWS_S3_LOCATION => the location of your deployment zip file (ie. s3://my-app-codepipeline-deployment/my_app.zip )
These variables will be accessible by the scripts executed by the Gitlab-CI containers.
Startup script
A simple startup script has been provided (https://gitlab.com/autronix/gitlabci-ec2-deployment-samples-guide/blob/master/deploy/extras/my_app.sh) to allow the deployment to perform the following tasks:
Start the application and create a PID file
Check the status of the application through the PID file
Stop the application
You may find this script under deploy/extras/my_app.sh
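As a rough sketch only (not the exact script from the repository; the jar path and PID file location are placeholders), a wrapper of that kind could look like:
#!/bin/bash
# start/stop/status wrapper driven by a PID file
PID_FILE=/opt/my_app/my_app.pid
case "$1" in
  start)
    nohup java -jar /opt/my_app/my_app.jar > /opt/my_app/my_app.log 2>&1 &
    echo $! > "$PID_FILE"
    ;;
  status)
    if [ -f "$PID_FILE" ] && kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then echo running; else echo stopped; fi
    ;;
  stop)
    [ -f "$PID_FILE" ] && kill "$(cat "$PID_FILE")" && rm -f "$PID_FILE"
    ;;
esac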
Creating gitlab-ci.yml
The gitlab-ci.yml file is in charge of performing the Continuous Integration tasks associated with a given commit.
It acts as a simplified group of shell scripts that are organized in stages which correspond to the different phases in your Continuous Integration steps.
For more information on the details and reference, please refer to the following two links:
http://docs.gitlab.com/ce/ci/quick_start/README.html
http://docs.gitlab.com/ce/ci/yaml/README.html
You may validate the syntax of your gitlab-ci.yml file at any time with the following tool: https://gitlab.com/ci/lint
For the purpose of deployment, we will cover only the last piece of the sample provided with this guide:
deploy-job:
  # Script to run for deploying application to AWS
  script:
    - apt-get --quiet install --yes python-pip  # AWS CLI requires python-pip, python is installed by default
    - pip install -U pip                        # pip update
    - pip install awscli                        # AWS CLI installation
    - $G build -x test -x distTar               # Build the project with Gradle
    - $G distZip                                # creates distribution zip for deployment
    - aws s3 cp $BUNDLE_SRC $AWS_S3_LOCATION    # Uploads the zipfile to S3 and expects the AWS Code Pipeline/Code Deploy to pick up
  # requires previous CI stages to succeed in order to execute
  when: on_success
  stage: deploy
  environment: production
  cache:
    key: "$CI_BUILD_NAME/$CI_BUILD_REF_NAME"
    untracked: true
    paths:
      - build/
  # Applies only to tags matching the regex: ie: v1.0.0-My-App-Release
  only:
    - /^v\d+\.\d+\.\d+-.*$/
  except:
    - branches
    - triggers
This part represents the whole job associated with the deployment, following any previous CI stages.
The relevant part associated with the deployment is this:
  # Script to run for deploying application to AWS
  script:
    - apt-get --quiet install --yes python-pip  # AWS CLI requires python-pip, python is installed by default
    - pip install -U pip                        # pip update
    - pip install awscli                        # AWS CLI installation
    - $G build -x test -x distTar               # Build the project with Gradle
    - $G distZip                                # creates distribution zip for deployment
    - aws s3 cp $BUNDLE_SRC $AWS_S3_LOCATION    # Uploads the zipfile to S3 and expects the AWS Code Pipeline/Code Deploy to pick up
The first step involves installing the python package management system: pip.
pip is required to install AWS CLI, which is necessary to upload the deployment file to AWS S3
In this example, we are using Gradle (defined by the environment variable $G); Gradle provides a module to automatically zip the deployment files. Depending on the type of project you are deploying, the method for generating the distribution zip file my_app.zip will be different.
The aws s3 cp $BUNDLE_SRC $AWS_S3_LOCATION command uploads the distribution zip file to the Amazon S3 location that we defined earlier. This file is then automatically detected by CodePipeline, processed and sent to CodeDeploy.
Finally, CodeDeploy performs the necessary tasks through the CodeDeploy agent as specified by the appspec.yml file.
Creating appspec.yml
The appspec.yml defines the behaviour to be followed by CodeDeploy once a deployment file has been received.
A sample file has been provided with this guide, along with sample scripts to be executed during the various phases of the deployment.
Please refer to the specification for the CodeDeploy AppSpec for more information on how to build the appspec.yml file: http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref.html
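As a sketch only (the paths, script names and hook set are placeholders, not the repository's actual sample), a minimal appspec.yml for a bundle laid out as described in the next section could be generated like this:
cat > appspec.yml <<'EOF'
version: 0.0
os: linux
files:
  - source: /my_app
    destination: /opt/my_app
hooks:
  ApplicationStop:
    - location: scripts/stop_application.sh
      timeout: 60
  ApplicationStart:
    - location: scripts/start_application.sh
      timeout: 60
EOF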
Generating the Deployment ZipFile
In order for CodeDeploy to work properly, you must create a properly generated zip file of your application.
The zip file must contain:
Zip root
appspec.yml => CodeDeploy deployment instructions
deployment stage scripts
the provided samples would be placed in the scripts directory in the zip file and would require the my_app.sh script to be added at the root of your application directory (ie. the my_app directory in the zip)
distribution code - in our example it would be under the my_app directory
Tools such as Gradle and Maven are capable of generating distribution zip files with certain alterations to the zip generation process.
If you do not use such a tool, you may have to instruct Gitlab-CI to generate this zip file in a different manner; this method is outside of the scope of this guide.
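For illustration only (the guide assumes Gradle or Maven produce the archive for you), a bundle with the layout listed above could be assembled manually like this:
zip -r my_app.zip appspec.yml scripts/ my_app/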
Deploying your application to EC2
The final step in this guide is actually performing a successful deployment.
The stages of Continuous integration are defined by the rules set in the gitlab-ci.yml. The example provided with this guide will initiate a deploy for any reference matching the following regex: /^v\d+\.\d+\.\d+-.*$/.
In this case, pushing a Tag v1.0.0-My-App-Alpha-Release through git onto your remote Gitlab would initiate the deployment process. You may adjust these rules as applicable to your project requirements.
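For example, using the tag name above, the push that kicks off the pipeline would be:
git tag v1.0.0-My-App-Alpha-Release
git push origin v1.0.0-My-App-Alpha-Release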
The gitlab-ci.yml example provided would perform the following jobs when detecting the Tag v1.0.0-My-App-Alpha-Release:
build job - compile the sources
test job - run the unit tests
deploy-job - compile the sources, generate the distribution zip, upload zip to Amazon S3
Once the distribution zip has been uploaded to Amazon S3, the following steps happen:
CodePipeline detects the change in the revision of the S3 zip file
CodePipeline validates the file
CodePipeline sends signal that the bundle for CodeDeploy is ready
CodeDeploy executes the deployment steps
Start - initialization of the deployment
Application Stop - Executes defined script for hook
DownloadBundle - Gets the bundle file from the S3 repository through the CodePipeline
BeforeInstall - Executes defined script for hook
Install - Copies the contents to the deployment location as defined by the files section of appspec.yml
AfterInstall - Executes defined script for hook
ApplicationStart - Executes defined script for hook
ValidateService - Executes defined script for hook
End - Signals the CodePipeline that the deployment has completed successfully
Successful deployment screenshots:
References
Gitlab-CI QuickStart: http://docs.gitlab.com/ce/ci/quick_start/README.html
Gitlab-CI .gitlab-ci.yml: http://docs.gitlab.com/ce/ci/yaml/README.html
AWS CodePipeline Walkthrough: http://docs.aws.amazon.com/codepipeline/latest/userguide/getting-started-w.html
Install or Reinstall the AWS CodeDeploy Agent: http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent-install.html
AWS CLI Getting Started - Env: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-environment
AppSpec Reference: http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref.html
autronix's answer is awesome, although in my case I had to give up the CodePipeline part due to the following error: The deployment failed because a specified file already exists at this location: /path/to/file. This is because I already have files at that location, since I'm using an existing instance with a server already running on it.
Here is my workaround:
In the .gitlab-ci.yml, here is what I changed:
deploy:
  stage: deploy
  script:
    - curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" # Downloading and installing awscli
    - unzip awscliv2.zip
    - ./aws/install
    - aws deploy push --application-name App-Name --s3-location s3://app-deployment/app.zip # Adding revision to s3 bucket
    - aws deploy create-deployment --application-name App-Name --s3-location bucket=app-deployment,key=app.zip,bundleType=zip --deployment-group-name App-Name-Fleet --deployment-config-name CodeDeployDefault.OneAtATime --file-exists-behavior OVERWRITE # Ordering the deployment of the new revision
  when: on_success
  only:
    refs:
      - dev
The important part is the aws deploy create-deployment line with its --file-exists-behavior flag. There are three options available; OVERWRITE was the one I needed, and since I couldn't manage to set this flag with CodePipeline, I went with the CLI option.
I've also changed the upload of the .zip a bit. Instead of creating the .zip myself, I'm using the aws deploy push command, which creates a .zip for me in the S3 bucket.
There is really nothing else to modify.

Pushing a tag to private docker registry in artifactory fails from mac

So, we have permissions set up on who can push to the Docker registry in Artifactory. Now, I created a dockercfg file at $HOME/.dockercfg on my Mac and added the username and password using this curl command:
curl -uaaaaa:bbbbbbb "https://docker.io/v2/auth" >> $HOME/.dockercfg
After that when I try to push an image to docker registry, it fails with the below error:
unauthorized: The client does not have permission to push to the repository.
When I look at the Docker request.log in the registry, I see it's trying to push as anonymous from my Mac. This is very confusing, even though I have the $HOME/.dockercfg which has a user.
I also tried the docker login docker.io way, but that isn't helping either.
It seems that the Artifactory Docker registry isn't able to find the user info when I am pushing from my Mac and shows it as anonymous.
My artifactory server version is 4.5.0 and docker 1.12.0-rc2.
Can anybody please help?
To the best of my knowledge, Docker 1.12 no longer supports configuration placed under ~/.dockercfg; instead it should be placed under ~/.docker/config.json. In any case, the method you're using was relevant for older Docker clients; you should use docker login to authenticate your Docker client with Artifactory.
As a side note, your version of Artifactory is a bit old - newer versions have made significant changes to support the newer Docker versions, so you should upgrade before trying again.
Also remember to configure your Reverse Proxy to work with Artifactory and that you also probably need to set up Docker (and your reverse proxy) to use self-signed certificates.
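As a hedged example (the hostname and repository key below are placeholders for your reverse-proxied Artifactory registry), the login/tag/push flow from the Mac would then be:
docker login artifactory.example.com
docker tag my-image:1.0 artifactory.example.com/docker-local/my-image:1.0
docker push artifactory.example.com/docker-local/my-image:1.0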
