To run docker build for my Java application I need sensitive information (a password to access the Nexus Maven repository).
What is the best way to make it available to the docker build process?
I thought about adding the ~/.m2/settings.xml to the container but it lies outside of the current directory/context and ADD is not able to access it.
UPDATE: in my current setup I need the credentials to run the build and create the image, not when running the container later based on the created image
You probably want to look into mounting a volume from the host into the container.
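A minimal sketch of that idea, assuming a standard Maven project and the maven:3-eclipse-temurin-17 image (both illustrative choices, not from the original answer): run the Maven build in a throwaway container with the host's ~/.m2 mounted, then build the runtime image from the resulting jar, so the Nexus password never ends up in an image layer.

    # Run the build in a container that can see the host's settings.xml
    docker run --rm \
      -v "$HOME/.m2:/root/.m2" \
      -v "$PWD:/workspace" -w /workspace \
      maven:3-eclipse-temurin-17 mvn -B package

    # Build the runtime image from the already-built artifact;
    # this Dockerfile never needs the credentials
    docker build -t my-app:latest .

On newer Docker versions, BuildKit build secrets (docker build --secret id=m2,src=$HOME/.m2/settings.xml) achieve a similar result inside the build itself.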
I have a basic Spring Boot application that I need to deploy in OpenShift. The OpenShift environment contains an application.yml/application.properties in a var/config directory. After deploying the application in OpenShift, I need to read the properties/yml file from that directory in my application. Is there any process for how to do this?
How are you building your container? Are you using S2I? Building the container yourself with Docker/Podman? And what is "var/config" relative to? When you say "the OpenShift contains", what do you mean?
In short, an application running in an OpenShift container has access to whatever files you add to the container. There is effectively no such thing as an "OpenShift directory": you have access to what is in the container (and what is mounted to it). The whole point of containers is that you are limited to that.
So your question probably boils down to, "how do I add a config file into my container". That will, however, depend on where the file is. Most tools for building containers will grab the standard places you'd have a config file, but if you have it somewhere non-standard you will have to copy it into the container yourself.
See this Spring Boot guide for how to use a Dockerfile to do it yourself. See here for using S2I to assemble the image (what you'd do if you were using the developer console of OpenShift to pull directly from a Git repository). (In general S2I tends to do what is needed automatically, but if your config is somewhere odd you may have to write an assemble script.)
This Dockerfile doc on copying files into a container might also be helpful.
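As an illustration of the "copy it into the container yourself" route, here is a minimal Dockerfile sketch; the jar name, the /var/config path, and the base image are assumptions, not details taken from the question.

    FROM eclipse-temurin:17-jre
    WORKDIR /app
    # Copy the built Spring Boot jar (name is an example)
    COPY target/my-app.jar app.jar
    # Copy the externalized config to the directory the app should read it from
    COPY config/application.yml /var/config/application.yml
    # Tell Spring Boot to also look in that directory for configuration
    ENTRYPOINT ["java", "-jar", "app.jar", "--spring.config.additional-location=/var/config/"]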
I recently deployed a Spring Boot application to AWS using Docker.
I'm going crazy trying to update my image/container. I've tried deleting everything, used and unused containers, images, tags, etc. and pushing everything again. Docker system prune, docker rm, docker rmi, using a different account too... It still runs that old version of the project.
It all indicates that there's something going on at the server level. I'm using PuTTY.
Help is much appreciated.
What do you mean by "old container"? Is it that you made changes in version control but never updated them on the container? Or have you just tried docker restart <container id>?
There's a lot to unpack here, so if you can provide more information - that'd be good. But here's a likely common denominator.
If you're using any AWS service (EKS, Fargate, ECS), these are just based on the Docker image you provide them. So if your image isn't correctly updated, they won't update.
But the same situation will occur with docker run as well. If you keep pointing to the same image (or the image is conceptually unchanged) then you won't see a change.
So I doubt the problem is within Docker or AWS.
Most likely:
You're not rebuilding the spring application binaries with each change
Your Dockerfile is pulling in the wrong binaries
If you are using an image hosting service, like ECR or Nexus, then you need to make sure the image name is pointing to the correct image:tag combination. If you update the image, it should be given a unique tag, and that tagged image should then be referenced by Docker/AWS.
After you build an image, you can verify that the correct binaries were copied by using docker export <container_id> | tar -xf - <location_of_binary_in_image_filesystem>
That will pull out the binary. Then you can run it locally to test if it's what you wanted.
You can view the entire filesystem with docker export <container_id> | tar -tf - | less
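A sketch of that whole workflow (the registry URL, image name, tag, and jar path are placeholders, not values from the question):

    # 1. Rebuild the application binaries so the jar actually contains your change
    ./mvnw clean package

    # 2. Build the image under a unique tag and push it
    docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:v42 .
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:v42

    # 3. Verify the new image really contains the new binary
    #    (the path inside the image depends on your Dockerfile)
    docker create --name check 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:v42
    docker export check | tar -xf - app/my-app.jar
    docker rm check

    # 4. Point your ECS/EKS service or docker run command at the :v42 tag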
I am using GitLab as my repository and want to push my code to EC2 whenever a commit is made on GitLab. The GitLab CI/CD documentation states that I have to add a file .gitlab-ci.yml at the root directory of my repo. This is actually a problem for me because I want the project repo to contain only code and not any configuration-related info like build and deploy steps. Also, when anybody clones the repo, they would have access to the location where my code is pushed/deployed on EC2. Is there any workaround for this problem?
You'll need to use a .gitlab-ci.yml file to deploy your application. The file provides instructions and a pipeline "infrastructure" which, if properly configured, will build, test and automatically deploy your code.
If you are worried about leaking credentials, you should use the built-in CI/CD variables (which can be masked in job logs) for your important bits, like a "$SERVERNAME" or "$DB_PASSWORD" for instance.
Lastly, you can use the power of .gitignore so that you don't publish your credentials or other sensitive bits to your project's repository.
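A minimal sketch of such a pipeline; the job names, the deploy commands, and the variable names are illustrative, and the variables themselves would be defined as masked CI/CD variables in the project settings rather than in the repo.

    # .gitlab-ci.yml (sketch)
    stages:
      - build
      - deploy

    build:
      stage: build
      script:
        - ./mvnw -B package

    deploy:
      stage: deploy
      script:
        # $DEPLOY_USER and $SERVERNAME come from masked CI/CD variables,
        # so they never appear in the repository
        - scp target/my-app.jar "$DEPLOY_USER@$SERVERNAME:/opt/my-app/"
        - ssh "$DEPLOY_USER@$SERVERNAME" "sudo systemctl restart my-app"
      only:
        - main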
We have a number of web services, written in clojure, and we also have some internal shared dependencies that we keep in a private maven repo. Leiningen requires an encrypted credentials file and at the moment each of our developers has their own private keys that lein uses to decrypt the credentials at runtime. I'm attempting to migrate to containers to make deployment and onboarding easier, but right away I've run into the problem that lein run from inside the container can't access my gpg keys, which are of course outside the container. I managed to generate a key inside the container using docker run bash and encrypt the credentials using that, but that won't scale as I'd have to keep unencrypted credentials inside the project directory. I'm not sure what the best path forward is - how can I securely pull from the private repo?
Two ideas which keep credentials secret and do not expose them to the target container:
Habitus to manage secret configuration for your build.
docker-volume-libsecret to mount secret data into a container.
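Whichever tool you choose, the underlying idea is the same: make the keys and encrypted credentials visible to the build step without copying them into an image layer. As a bare-bones illustration of that idea without either tool (the image tag, paths, and lein task are assumptions), you could run the Leiningen build in a throwaway container with the host's GPG and Leiningen directories mounted read-only:

    docker run --rm \
      -v "$HOME/.gnupg:/root/.gnupg:ro" \
      -v "$HOME/.lein:/root/.lein:ro" \
      -v "$PWD:/build" -w /build \
      clojure:lein lein uberjar

    # Copy the resulting uberjar into a slim runtime image afterwards,
    # so neither the keys nor the decrypted credentials end up in a layer.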
I have an AMI which is configured with our production code. I am using Nginx + Unicorn as the server setup.
The problem I am facing is that whenever traffic goes up I need to boot a new instance, log in to it, and do a git pull, bundle update and asset precompile, which is time consuming. I want to avoid all of this.
Now I want a script/process that automates the whole deployment process (git pull, bundle update and precompile) as soon as I boot a new instance from this AMI.
Is there a good way to get this done? Any help would be appreciated.
You can place your commands in /etc/rc.local (commands in this file are executed when the server boots).
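A rough sketch of such a script; the application path, deploy user, and Unicorn restart command are assumptions about your setup, not details from the question.

    #!/bin/sh
    # /etc/rc.local (sketch) - runs once when the instance boots
    cd /var/www/application || exit 1
    sudo -u deploy -H sh -c '
      git pull origin master &&
      bundle install --deployment --without development test &&
      RAILS_ENV=production bundle exec rake assets:precompile
    '
    # Restart the app server so it serves the freshly pulled code
    service unicorn restart
    exit 0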
But the best way is to use Capistrano. You need to add require "capistrano/bundler" to your deploy.rb file, and bundler will be run automatically on each deploy. For more information you can read this article: https://semaphoreapp.com/blog/2013/11/26/capistrano-3-upgrade-guide.html
An alternative approach is to deploy your app to a separate EBS volume (you can still mount this inside /var/www/application or wherever it currently is)
After deploying, you create an EBS snapshot of this volume. When you create a new instance, you tell EC2 to create a new volume for your instance from the snapshot, so the instance will start with the latest gems/code already installed (I find bundle install can take several minutes). All your startup script needs to do is mount the volume (or, if you have added it to the fstab when you made the AMI, you don't even need to do that). I much prefer scaling operations like this to have no external dependencies (e.g. what would you do if GitHub or RubyGems had an outage just when you needed to deploy?).
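With that in place, the boot-time work shrinks to something like the following (the device name and mount point are assumptions; on newer instance types the device may appear as /dev/nvme1n1 instead):

    # Startup script: just mount the volume created from the snapshot
    mount /dev/xvdf /var/www/application

    # Or bake it into the AMI's /etc/fstab so no startup script is needed at all:
    # /dev/xvdf  /var/www/application  ext4  defaults,nofail  0  2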
You can even take this a step further by using Amazon's Auto Scaling service. In a nutshell, you create a launch configuration where you specify the AMI, instance type, volume snapshots, etc. Then you control the group size either manually (through the web console or the API), according to a fixed schedule, or based on CloudWatch metrics. Amazon will create or destroy instances as needed, using the information in your launch configuration.
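A sketch of that setup with the AWS CLI (the names, AMI ID, instance type, sizes, and availability zones are placeholders):

    # Launch configuration: which AMI and instance type to start from
    aws autoscaling create-launch-configuration \
      --launch-configuration-name my-app-lc \
      --image-id ami-0123456789abcdef0 \
      --instance-type t3.small \
      --key-name my-key

    # Auto Scaling group: how many instances to keep, and where
    aws autoscaling create-auto-scaling-group \
      --auto-scaling-group-name my-app-asg \
      --launch-configuration-name my-app-lc \
      --min-size 1 --max-size 4 --desired-capacity 1 \
      --availability-zones us-east-1a us-east-1b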