I have a basic Spring Boot application that I need to deploy on OpenShift. The OpenShift environment contains an application.yml/application.properties file in a var/config directory. After deploying the application to OpenShift, I need to read the properties/yml file from that directory in my application. Is there a process for doing this?
How are you building your container? Are you using S2I, or building the container yourself with Docker/Podman? And what is "var/config" relative to? When you say "the OpenShift contains", what do you mean?
In short, an application running in an OpenShift container has access to whatever files you add to the container. There is effectively no such thing as an "OpenShift directory": you have access to what is in the container (and what is mounted to it). The whole point of containers is that you are limited to that.
So your question probably boils down to, "how do I add a config file to my container?" That will, however, depend on where the file is. Most tools for building containers will grab the standard places you'd have a config file, but if you have it somewhere non-standard you will have to copy it into the container yourself.
See this Spring Boot guide for how to use a Dockerfile to do it yourself, and see here for using S2I to assemble the image (what you'd do if you were using the OpenShift developer console to build directly from your source). In general, S2I tends to do what is needed automatically, but if your config is somewhere odd you may have to write an assemble script.
This Dockerfile doc on copying files into a container might also be helpful.
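For illustration, a minimal Dockerfile sketch that copies a non-standard config directory into the image (the var/config path and jar name are assumptions based on the question):

FROM eclipse-temurin:17-jre
# Copy the application jar (name is an example)
COPY target/app.jar /app/app.jar
# Copy the external config directory into the image
COPY var/config/ /app/config/
WORKDIR /app
# Spring Boot picks up config from ./config relative to the working directory by default
ENTRYPOINT ["java", "-jar", "app.jar"]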
I have a gradle based Spring Boot 3 application. I use the bootBuildImage gradle task in circleci to build a docker image of this application.
Now, I want to add NewRelic to this application. I know I can do it by writing my own Dockerfile but I want to do it by configuring the bootBuildImage gradle task.
I saw that I can add buildPacks like this:
tasks.named("bootBuildImage") {
    buildpacks = [...]
}
And it appears that NewRelic has a buildpack here.
How can I generate the docker image with NewRelic integration?
Bonus: I need to inject an environment variable such as NEW_RELIC_ENABLE_AGENT=true|false. How can I do that?
You're on the right track. You want to use the New Relic Buildpack that you found.
High-level instructions for that buildpack can be found here. It essentially works by taking in bindings (the secret config data), which the buildpack securely maps to the standard New Relic agent configuration properties (through environment variables).
An example of an APM tool configured through bindings can be found here. The specific example is using a different APM tool, but the same steps will work with any APM tool configured through bindings, like New Relic.
For your app:
Create a bindings directory. The root of your project is a reasonable place, but the path doesn't ultimately matter. Don't check in binding files that contain secret data :)
In the folder, create a subfolder called new-relic. Again, the name doesn't really matter.
In the folder from the previous step, create a file called type. The name does matter. In that file, write NewRelic and that's it. Save the file. This is how the buildpack identifies the bindings.
In the same folder, you can now add additional files to configure New Relic. The name of the file is the key and the contents of the file are the value. When your app runs, the buildpack will read the bindings and translate them to New Relic configuration settings in the form NEW_RELIC_<KEY>=<VALUE>. Thus if you read the New Relic docs and see a property called foo, you could make a file called foo containing the value bar, and at runtime you'll end up with an environment variable NEW_RELIC_foo=bar being set. The New Relic agent reads environment variables for its configuration, although sometimes that's not the first method mentioned in their docs.
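For example, a quick shell sketch of those steps (the license_key file name is an assumption; use whatever properties your New Relic setup needs):

mkdir -p bindings/new-relic
# The type file is how the buildpack detects the binding
printf 'NewRelic' > bindings/new-relic/type
# Each additional file becomes NEW_RELIC_<KEY>=<VALUE> at runtime
printf 'your-license-key' > bindings/new-relic/license_key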
Next you need to configure your build.gradle file. These changes will tell bootBuildImage to add the New Relic buildpack and to pass through your bindings.
In the tasks.named("bootBuildImage") block, add buildpacks = ["urn:cnb:builder:paketo-buildpacks/java", "gcr.io/paketo-buildpacks/new-relic"]. This will run the standard Java buildpack and then append New Relic onto the end of that list. Example.
Add a bindings list. In the same tasks.named("bootBuildImage") block add bindings = ["path/to/local/bindings/new-relic:/platform/bindings/new-relic"]. This will mount path/to/local/bindings/new-relic on your host to /platform/bindings/new-relic in the container, which is where the buildpack expects bindings to live. You will need to change the first path to point to the local bindings you created above (you can probably use a Gradle variable for the project path to reference them, but I don't know it off the top of my head). Don't change the path on the container side; that needs to be exactly what I put above.
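Putting both changes together, a sketch of the build.gradle block (assuming the bindings directory from the earlier steps lives under the project root; adjust the host path as needed):

tasks.named("bootBuildImage") {
    buildpacks = ["urn:cnb:builder:paketo-buildpacks/java",
                  "gcr.io/paketo-buildpacks/new-relic"]
    // host path : container path (the container side must not change)
    bindings = ["${project.rootDir}/bindings/new-relic:/platform/bindings/new-relic"]
}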
Run your build. ./gradlew bootBuildImage. In the output, you should see the New Relic buildpack pass detection (it passes if it finds the type file with NewRelic as the contents) and it should also run and contribute the New Relic agent as is described in the buildpack README.md.
After a successful build, you'll have the image. The key thing to remember is that bindings are not added to the image. This is intentional, for security reasons. You don't want secret binding info to be included in the image, as that would leak your secrets.
This means that you must also pass the bindings through to your container runtime when you run the image. If you're using Docker, you can docker run --volume path/to/local/bindings/new-relic:/platform/bindings/new-relic ... and use the same paths as at build time. If you're deploying to Kubernetes, you'll need to set up Secrets in K8s and mount those Secrets as files within the container under the same path as before, /platform/bindings/new-relic. So you need to make a type file, /platform/bindings/new-relic/type, and files for each key/value parameter you want to set.
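A sketch of the Kubernetes side, assuming a Secret named new-relic-binding and an example Deployment (all names, keys, and the image tag are assumptions):

apiVersion: v1
kind: Secret
metadata:
  name: new-relic-binding
stringData:
  type: NewRelic              # plays the role of the local type file
  license_key: example-key    # becomes NEW_RELIC_license_key at runtime
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: my-registry/my-api:latest   # the image built by bootBuildImage
          volumeMounts:
            - name: new-relic-binding
              mountPath: /platform/bindings/new-relic   # must match the buildpack's expected path
              readOnly: true
      volumes:
        - name: new-relic-binding
          secret:
            secretName: new-relic-binding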
At some point in the future, we're working to have all of the APM buildpacks included in the main Java buildpack by default. This would eliminate the buildpacks change to build.gradle above.
Because managing bindings can be kind of a pain, I also have a project called binding-tool that can help with steps 1-3. It allows you to easily create the binding files, like bt add -t NewRelic -p key1=val1 -p key2=val2. It's not doing anything magic, just creates the files for you, but I find it handy. In the future, I want it to generate the Kubernetes YAML as well.
We have built a few Microservices (MS) which have been deployed to our company's K8s clusters.
Currently, each of our MSs is built as a Docker image and deployed manually using the following steps, and it works fine:
Create a ConfigMap
Install a Service.yaml
Install a Deployment.yaml
Install an Ingress.yaml
I'm now looking at Helm v3 to simplify and encapsulate these deployments. I've read a lot of the Helm v3 documentation, but I still haven't found the answer to some simple questions and I hope to get an answer here before absorbing the entire doc along with Go and SPRIG and then finding out it won't fit our needs.
Our Spring MS has 5 separate application.properties files that are specific to each of our 5 environments. These properties files are simple multi-line key=value format with some comments preceded by #.
# environment based values
key1=value1
key2=value2
Using helm create, I created a chart called ./deploy in the root directory, which auto-generated ./templates and a values.yaml.
The problem is that I need to access the application.properties files outside of the Chart's ./deploy directory.
From helm, I'd like to reference these 2 files from within my configmap.yaml's Data: section.
./src/main/resources/dev/application.properties
./src/main/resources/logback.xml
And I want to keep these files in their current format, not rewrite them to JSON/YAML format.
Does Helm v3 allow this?
Posting this as an answer as there's not enough space in the comments!
Check the 12 factor app link I shared above, in particular the section on configuration... The explanation there is not great, but the idea behind it is to build one container and deploy that container in any environment without having to modify it, plus to have the ability to change the configuration without the need to create a new release (the latter cannot be done if the config is baked into the container). This allows, for example, changing a DB connection pool size (or any other config parameter) without a release. It's also good from a security point of view, as you might not want the container running in your lower environments (dev/test/whatnot) to have production configuration (passwords, API keys, etc). This approach is similar to the Continuous Delivery principle of build once, deploy anywhere.
I assume that when you run the app locally, you only need access to one set of configuration, so you can keep that in a separate file (e.g. application.dev.properties), and have the parameters that change between environments in helm environment variables. I know you mentioned you don't want to do this, but this is considered a good practice nowadays (might be considered otherwise in the future...).
I also think it's important to be pragmatic, if in your case you don't feel the need to have the configuration outside of the container, then don't do it, and probably using the suggestion I gave to change a command line parameter to pick the config file works well. At the same time, keep in mind the 12 factor-app approach in case you find out you do need it in the future.
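For reference on the literal question: Helm templates can only read files that live inside the chart directory, so files outside ./deploy are not reachable with .Files. If your build copies the properties file into the chart, a ConfigMap template can embed it verbatim, keeping the key=value format. A sketch, with assumed paths:

# deploy/templates/configmap.yaml (assumes the build copies
# src/main/resources/dev/application.properties and logback.xml into deploy/config/)
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  application.properties: |-
{{ .Files.Get "config/application.properties" | indent 4 }}
  logback.xml: |-
{{ .Files.Get "config/logback.xml" | indent 4 }}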
Lots of questions seem to have been asked about getting multiple projects inside a single solution working with docker compose but none that address multiple solutions.
To set the scene, we have multiple .NET Core APIs each as a separate VS 2019 solution. They all need to be able to use (as a minimum) the same RabbitMQ container running locally as this deals with all of the communication between the services.
I have been able to get this setup working for a single solution by:
Adding 'Container orchestration support' for an API project.
This created a new docker-compose project in the solution I did it for.
Updating the docker-compose.yml to include both a RabbitMQ and a MongoDB image (see image below - sorry, I couldn't get it to paste correctly as text/code):
Now when I launch, new RabbitMQ and MongoDB containers are created.
I then did exactly the same thing with another solution and, unsurprisingly, it wasn't able to start because the RabbitMQ ports were already in use (i.e. it tried to create another new RabbitMQ container).
I kind of expected this but don't know the best/right way to properly configure this and any help or advice would be greatly appreciated.
I have been able to compose multiple services from multiple solutions by setting the value of context to the appropriate relative path. Using your docker-compose example and adding my-other-api-project you end up with something like:
services:
  my-api-project:
    # <as you have it currently>
  my-other-api-project:
    image: ${DOCKER_REGISTRY-}my-other-api-project
    build:
      context: ../my-other-api-project/   # or whatever the relative path is to your other project
      dockerfile: my-other-api-project/Dockerfile
    ports:
      # <your port mappings>
    depends_on:
      - some-mongo
      - some-rabbit
  some-rabbit:
    # <as you have it currently>
  some-mongo:
    # <as you have it currently>
So I thought I would answer my own question as I think I eventually found a good (not perfect) solution. I did the following steps:
Created a custom docker network.
Created a single docker-compose.yml for my RabbitMQ, SQL Server and MongoDB containers (using my custom network) - see the sketch after these steps.
Setup docker-compose container orchestration support for each service (right click on the API project and choose add container orchestration).
The above step creates the docker-compose project in the solution with docker-compose.yml and docker-compose.override.yml
I then edit the docker-compose.yml so that the containers use my custom Docker network and also explicitly specify the port numbers (so they're consistently the same).
I edited the docker-compose.override.yml environment variables so that my connection strings point to the relevant container names on my docker network (i.e. RabbitMQ, SQL Server and MongoDB) - no more need to worry about IPs and when I set the solution to startup using docker-compose project in debug mode my debug containers can access those services.
Now I can close the VS solution and go to the command line and navigate to the solution folder and run 'docker-compose up' to start the container.
I set up each VS solution as per steps 3-7 and can start up any/all services locally without the need to open VS anymore (provided I don't need to debug).
When I need to debug/change a service, I stop the specific container (i.e. 'docker container stop containerId') and then open the solution in VS and start it in debug mode/make changes, etc.
If I pull down changes by anyone else I re-build the relevant container on the command line by going to the solution folder and running 'docker-compose build'.
As a Brucey bonus, I wrote a PowerShell script to start all of my containers using each docker-compose file, as well as one to build them all, so when I turn on my laptop I simply run that and my full dev environment and 10 services are up and running.
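To illustrate steps 1-2, a sketch of the shared-infrastructure setup (the network name, service names, image versions, ports and password are all examples):

docker network create my-dev-network

# docker-compose.yml for the shared infrastructure containers
services:
  some-rabbit:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"
  some-sql:
    image: mcr.microsoft.com/mssql/server:2022-latest
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Your_password123"   # example only, don't commit real secrets
    ports:
      - "1433:1433"
  some-mongo:
    image: mongo:6
    ports:
      - "27017:27017"

networks:
  default:
    name: my-dev-network
    external: true

Each API solution's docker-compose.yml then joins the same external network, so connection strings can use the container names directly.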
For the most part this works great but with some caveats:
I use HTTPS and dev-certs, and sometimes things don't play well and I have to clean the certs/re-trust them, because Kestrel throws errors and expects the certificate to have a certain name and to be trusted. I'm working on improving this, but you could always not use HTTPS locally in dev.
If you're using your own NuGet server like me, you'll need a NuGet.config file and to copy that in as part of your Dockerfiles.
For running docker build of my Java application I need sensitive information (a password to access the nexus maven repository).
What is the best way to make it available to the docker build process?
I thought about adding the ~/.m2/settings.xml to the container but it lies outside of the current directory/context and ADD is not able to access it.
UPDATE: in my current setup I need the credentials to run the build and create the image, not when running the container later based on the created image
You probably want to look into mounting a volume from the HOST into the container.
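A sketch of that approach: run the Maven build inside a container with the host's ~/.m2 (including settings.xml and its credentials) mounted, then build the image from the resulting artifact. The image tags and paths are assumptions:

# Build the jar with the host's Maven settings/credentials mounted in
docker run --rm \
  -v "$HOME/.m2":/root/.m2 \
  -v "$PWD":/usr/src/app -w /usr/src/app \
  maven:3-eclipse-temurin-17 mvn -B package

# The credentials never enter the image; the Dockerfile only copies the jar
docker build -t my-app .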
I have a few resources (log files, database files, separate configuration files, etc.) that I would like to be able to access from my OSGi bundles. Up until now, I've been using a relative file path to access them. However, now my same bundles are running in different environments (plain old Felix and Glassfish).
Of course, the working directories are different and I would like to be able to use a method where the directory is known and deterministic. From what I can tell, the working directory for Glassfish shouldn't be assumed and isn't spec'ed (glassfish3/glassfish/domains/domain1/config currently).
I could try to embed these files in the bundle themselves, but then they would not be easily accessible. For instance, I want it to be easy to find the log files and not have to explode a cached bundle to access it. Also, I don't know that I can give my H2 JDBC driver a URL to something inside a bundle.
A good method is to store persistent files in a subdirectory of the current working directory (System.getProperty("user.dir")) or of the user's home directory (System.getProperty("user.home")).
Temporary and bundle-specific files should be stored in the bundle's data area (BundleContext.getDataFile()). Uninstalling the bundle will then automatically clean up. If different bundles need access to the same files, use a service to pass this information.
The last option, for really long-lived, critically important files like major databases, is to store them in /var or the Windows equivalent. In those cases I would point out the location with Config Admin.
In general it is a good idea to deliver the files in a bundle and expand them to their proper place. This makes managing the system easier.
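As a sketch of the data-area approach (the directory and database names are examples):

import java.io.File;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {
    @Override
    public void start(BundleContext context) {
        // getDataFile resolves a file in the bundle's private, persistent
        // storage area; the framework removes it when the bundle is uninstalled.
        File dbDir = context.getDataFile("h2");
        dbDir.mkdirs();
        // Hand H2 an absolute path so the working directory no longer matters
        String jdbcUrl = "jdbc:h2:" + new File(dbDir, "appdb").getAbsolutePath();
        // ...create the DataSource/connection with jdbcUrl...
    }

    @Override
    public void stop(BundleContext context) {
    }
}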
You have some options here. The first is to use the Configuration Admin service to specify a configuration directory, so you can access files if you have to.
For log files I recommend Ops4J Pax Logging. It allows you to simply use a logging API like slf4j and Pax Logging does the log management. It can be configured using a log4j config.
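For example, a log4j-style configuration that Pax Logging can consume (where it lives depends on your container; in Karaf it's etc/org.ops4j.pax.logging.cfg, and the log path below is an example):

log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=/var/log/myapp/app.log
log4j.appender.file.MaxFileSize=10MB
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d %-5p [%c] %m%n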
I think you should install the DB as a bundle too. For example, I use Derby a lot in smaller projects. Derby can simply be started as a bundle and then manages the database files itself. I'm not sure about H2, but I guess it could work similarly.