We have a project where we need to build a Quarkus app in a container, and we cannot run a container within a container. So I need a way to create the runner app without using the native image/Docker build. Can someone please suggest how we can achieve this?
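For reference, Quarkus's standard JVM-mode packaging already produces a runnable app without Docker or GraalVM; a minimal sketch, assuming a Maven build and a recent Quarkus version where the fast-jar layout below is the default (the uber-jar name depends on your artifactId/version):

# package in JVM mode; no Docker or native image involved
./mvnw package
java -jar target/quarkus-app/quarkus-run.jar

# or build a single self-contained uber-jar instead
./mvnw package -Dquarkus.package.type=uber-jar
java -jar target/myapp-1.0-SNAPSHOT-runner.jar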
Currently my team maintains many Spring Boot microservices. When running them locally, our workflow is to open a new IntelliJ IDEA window and press the "run" button for each microservice. This does the same thing as typing gradle bootRun. At a minimum, each service depends on a config server (from which they get their config settings) and a Eureka server. Their dependencies are specified in a bootstrap.yml file. I was wondering if there is a way to launch just one microservice (or some script or run configuration) that would programmatically know which dependencies to start along with the service I am testing? It seems cumbersome to start them the way we do now.
If you're using Docker, then you could use Docker Compose to launch services in a specific order using the depends_on option. Take a look here and see if that will solve your problem:
https://docs.docker.com/compose/startup-order/
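A minimal sketch of that, with hypothetical image names for the config server, Eureka server, and the service under test:

version: "3.8"
services:
  config-server:
    image: my-config-server:latest
  eureka-server:
    image: my-eureka-server:latest
    depends_on:
      - config-server
  my-service:
    image: my-service:latest
    depends_on:
      - config-server
      - eureka-server

Note that depends_on only controls start order, not readiness; the linked page describes how to wait until a service is actually ready.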
I have a Spring Boot web application which is currently deployed on Google App Engine. Now I have shifted to Docker and want to deploy the Docker image of this application to App Engine.
So far, I could not find any document related to this. Most of the documents explain how to deploy a Docker image of Spring Boot on Tomcat. Is there any way to achieve this?
First, you need App Engine using the flexible environment if you want to deploy a Docker image.
Here is the document Building Custom Runtimes.
A custom runtime allows you to use an alternate implementation of any supported App Engine flexible environment language, or to customize a Google-provided one. It also allows you to write code in any other language that can handle incoming HTTP requests (example). With a custom runtime, the App Engine flexible environment provides and manages your scaling, monitoring, and load balancing infrastructure for you, so you can focus on building your application.
In the official example they have a sample Dockerfile using Jetty. But you can ignore the Jetty part; just make your Spring Boot application executable and run it.
FROM gcr.io/google-appengine/jetty
ADD test-webapp-1.0-SNAPSHOT.war $JETTY_BASE/webapps/root.war
WORKDIR $JETTY_BASE
RUN java -jar $JETTY_HOME/start.jar --approve-all-licenses --add-to-startd=jmx,stats,hawtio \
    && chown -R jetty:jetty $JETTY_BASE
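If you drop Jetty and ship an executable Spring Boot jar instead, the Dockerfile can be as small as the following sketch (the base image and jar name are assumptions; use whatever your build actually produces):

# base image and jar name are assumptions
FROM openjdk:8-jre
COPY target/my-app-1.0-SNAPSHOT.jar /app.jar
# App Engine flexible routes requests to port 8080
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]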
Hopefully this helps:
https://github.com/GoogleCloudPlatform/getting-started-java/tree/master/helloworld-springboot
One compelling benefit of Docker containers is that, when a container works on one runtime (e.g. Tomcat), it should be relatively straightforward to swap in a different runtime (e.g. App Engine).
NB App Engine Flexible is the specific service that you want. It is similar to App Engine Standard but it schedules containers for you.
The primary requirement for a container (image) to work with App Engine Flexible is that the container expose an HTTP endpoint on port 8080. As long as your container meets this obligation, you can run anything within it.
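With that in place, a custom runtime only needs a minimal app.yaml next to your Dockerfile; a sketch:

# app.yaml for deploying a custom container to App Engine Flexible
runtime: custom
env: flex

Running gcloud app deploy then builds the Dockerfile in the same directory and deploys the resulting container.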
I have a library that relies on exporting the sun.reflect package from jre.properties. During testing I have been adding this manually. What can I do to ensure this is automatically added within Apache Karaf?
Changes to etc/jre.properties require a container restart. If you are deploying this Karaf instance inside a Linux container (e.g. Docker), you would simply include this change as part of the container image build.
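A minimal sketch of that image build; the base image, tag, and install path here are assumptions, so adjust them to the Karaf image you actually use:

# base image and etc/ path are assumptions
FROM apache/karaf:4.2.9
# overlay a jre.properties that exports the sun.reflect package
COPY jre.properties /opt/apache-karaf/etc/jre.properties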
However, if you are deploying into a virtual machine environment, you'd want to make this part of your organization's custom build of Karaf. I suggest using a Maven project with the assembly plugin to apply all of your organization's changes: LDAP, security, SSL certs, etc/jre.properties, and so on. It would then create a new .tar.gz or .zip file, and you would deploy your app into the modified Karaf instance.
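A sketch of such a project, using the karaf-maven-plugin with karaf-assembly packaging (the version is an assumption; match your Karaf version); the plugin can then overlay customized files such as etc/jre.properties onto the distribution it generates:

<project>
  ...
  <packaging>karaf-assembly</packaging>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.karaf.tooling</groupId>
        <artifactId>karaf-maven-plugin</artifactId>
        <!-- version is an assumption; match your Karaf version -->
        <version>4.2.9</version>
        <extensions>true</extensions>
      </plugin>
    </plugins>
  </build>
</project>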
There is an example in the HYTE Runtime build here:
HYTE Runtime
Technically, you could leverage the feature deployment mechanism to deploy an updated file, but this won't cause the Karaf instance to restart.
I have a multi-module Vert.x application deployed on OpenShift. For integration-testing purposes, I would like to deploy a database container with pre-defined data, and destroy it when the test is finished.
How can I achieve this?
My application uses JUnit and the fabric8 Maven plugin to deploy containers on OpenShift.
This is something that can be done relatively easily using arquillian-cube, which supports Kubernetes and OpenShift.
What arquillian-cube can do for you is (optionally) create an ephemeral project, deploy everything you need for your test, and, once everything is up and running, start your tests. At the end it can also do the cleanup for you.
It is quite flexible, so it can work with either ephemeral or fixed projects according to your needs and requirements. There are also plenty of configuration options when it comes to cleaning up.
Last but not least, it plays quite nicely with the fabric8 Maven plugin.
https://github.com/arquillian/arquillian-cube/blob/master/docs/kubernetes.adoc
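To try it, the relevant test dependency is the arquillian-cube-openshift artifact; a sketch (the version is an assumption; check Maven Central for the latest):

<dependency>
  <groupId>org.arquillian.cube</groupId>
  <artifactId>arquillian-cube-openshift</artifactId>
  <!-- version is an assumption; check Maven Central -->
  <version>1.18.2</version>
  <scope>test</scope>
</dependency>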
I have built a Spring Boot application and containerized it. I have two ways to inject configuration into the service.
1. As part of the code (hard-coded) in the application.properties file with multiple profiles; my Dockerfile only accepts one variable for -Dspring.profiles.active=${environment} as part of the CMD to start the app container.

Example application.properties:

spring:
  profiles: dev
---
spring:
  profiles: prod
2. Load the properties file onto the host machine running the app and inject it into the container at start-up.

Example: docker run -d --env-file=environment(dev).properties myapp:latest
I would like to know what the industry-standard way is to inject properties into a microservice app, with the advantages and disadvantages.
Do you keep configuration close to the app?
Or do you prefer to inject it as a dependency when the app starts?
My understanding: I prefer configuration closer to the container, as it keeps dependencies minimal; however, a small change will warrant a new build and deployment.
The second option has the advantage that the app code (image) does not require a change, and you can inject updated configuration with just a container restart.
In my company we went with the first solution; however, I am not sure if it is an industry standard or not. The main reason is that it is very unlikely for us to change the configuration after building the Docker container.
Also, if you build different containers for different environments, passing the -Dspring.profiles.active=${environment} parameter to the container run command is not very smart (it is always prod for the production container). Instead, in the Dockerfile, you can just copy the appropriate environment.properties.
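A sketch of that approach using a build argument, so one Dockerfile can produce per-environment images (the ARG name, file layout, and base image are assumptions):

FROM openjdk:8-jre
# choose which environment's properties to bake in at build time
ARG ENVIRONMENT=prod
COPY target/myapp.jar /app.jar
COPY config/application-${ENVIRONMENT}.properties /config/application.properties
ENTRYPOINT ["java", "-jar", "/app.jar", "--spring.config.location=/config/application.properties"]

Build with, for example: docker build --build-arg ENVIRONMENT=dev -t myapp:dev .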