Reset database container on OpenShift - Maven

I have a multi-module Vert.x application deployed on OpenShift. For integration testing purposes, I would like to deploy a database container with pre-defined data and destroy it when the test is finished.
How can I achieve this?
My application uses JUnit and the fabric8 Maven plugin to deploy containers on OpenShift.

This is something that can be done relatively easily using arquillian-cube, which supports Kubernetes and OpenShift.
What arquillian-cube can do for you is (optionally) create an ephemeral project, deploy everything you need for your test, and start your tests once everything is up and running. At the end it can also do the cleanup for you.
It is quite flexible, so depending on your needs and requirements it can work with either ephemeral or fixed projects, and there are plenty of configuration options when it comes to cleaning up.
Last but not least, it plays quite nicely with the fabric8 Maven plugin.
https://github.com/arquillian/arquillian-cube/blob/master/docs/kubernetes.adoc
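For illustration, here is a rough sketch of what such a test could look like with the arquillian-cube Kubernetes extension, assuming the extension is on the test classpath and your resource descriptors deploy a database labelled app=test-database (the label and class names are hypothetical; the exact annotations and configuration options are covered in the linked documentation):

    import io.fabric8.kubernetes.client.KubernetesClient;
    import org.jboss.arquillian.junit.Arquillian;
    import org.jboss.arquillian.test.api.ArquillianResource;
    import org.junit.Test;
    import org.junit.runner.RunWith;

    import static org.junit.Assert.assertFalse;

    // arquillian-cube creates/uses a namespace, applies your Kubernetes/OpenShift
    // resources, waits for them to become ready, and (optionally) cleans up afterwards.
    @RunWith(Arquillian.class)
    public class DatabaseIT {

        // The extension can inject a client scoped to the test namespace.
        @ArquillianResource
        private KubernetesClient client;

        @Test
        public void databasePodShouldBeRunning() {
            // Hypothetical label; match whatever your database deployment uses.
            assertFalse(client.pods()
                    .withLabel("app", "test-database")
                    .list().getItems().isEmpty());
        }
    }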

Related

Can you use Testcontainers to manage service dependencies, like a database, during local development?

Testcontainers can manage dockerized service dependencies, like a database, Kafka, Elasticsearch, and so on for integration testing.
Can I configure my Spring Boot application to manage these service dependencies during local development?
For example, my Spring Boot application needs a MySQL database.
I would like to integrate it with Testcontainers to provide a Docker container with MySQL not only during test execution, but also at application startup during local development.
Testcontainers provides an API to manage applications and services in Docker containers. It's incredibly useful for integration testing, where a programmatically configured, isolated, repeatable environment is an essential requirement for trustworthy tests.
Because of that, Testcontainers has integrations with frameworks like Spring and Quarkus, and test frameworks like JUnit, Spock, etc., to automatically tie the lifecycle of your containerized dependencies to the lifecycle of your tests.
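For example, with the JUnit 5 integration the container lifecycle is bound to the test lifecycle declaratively; a minimal sketch using the MySQL module (image tag and class names are just examples):

    import org.junit.jupiter.api.Test;
    import org.testcontainers.containers.MySQLContainer;
    import org.testcontainers.junit.jupiter.Container;
    import org.testcontainers.junit.jupiter.Testcontainers;

    import static org.junit.jupiter.api.Assertions.assertTrue;

    // @Testcontainers starts the annotated containers before the tests run
    // and stops them when the tests are done.
    @Testcontainers
    class CustomerRepositoryTest {

        // Shared throwaway MySQL instance for this test class.
        @Container
        static final MySQLContainer<?> mysql = new MySQLContainer<>("mysql:8.0");

        @Test
        void containerIsRunning() {
            // In a real test you would point your datasource at mysql.getJdbcUrl(),
            // mysql.getUsername() and mysql.getPassword().
            assertTrue(mysql.isRunning());
        }
    }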
However, the Testcontainers API is generic and doesn't have to run only during tests. For example, Quarkus has a feature called Dev Services which automatically creates a container for your database (or other service dependencies, for example Kafka, Redis, etc.) when your application tries to access the database but the configuration is not present.
You can think about it like this: if you have the data access repository classes initialized and wired, but no datasource.url in the config, it will spin up the database using Testcontainers and configure the app to use it (just like it would during tests, but used for local development instead).
Spring Boot doesn't currently have an automated feature like that; there's an open issue to investigate these local development setups with Testcontainers.
If you're open to manually adding such a feature to your particular application, you can look at the prototype linked from that issue here: https://github.com/joshlong/testcontainers-auto-services-prototype
It's a bit more involved because it integrates with Spring DevTools, but here are the essential parts that need to be taken care of (a rough sketch follows the list below):
Check that you need to use the database (in your application it can be a given).
Verify that the configuration to use the database is absent (if the database is already configured, you don't need to spin up a new one).
Create a container using Testcontainers API, either using an appropriate module or the GenericContainer with any Docker image.
Provide the configuration back to the application. For the database that would be the jdbcUrl, username, password, database name, r2dbcUrl and any other relevant properties.
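A minimal sketch of those steps as a Spring ApplicationContextInitializer (how you register it, e.g. only for a dev profile, and the exact property keys you need depend on your application; the datasource keys below are the standard Spring Boot ones):

    import java.util.Map;

    import org.springframework.context.ApplicationContextInitializer;
    import org.springframework.context.ConfigurableApplicationContext;
    import org.springframework.core.env.ConfigurableEnvironment;
    import org.springframework.core.env.MapPropertySource;
    import org.testcontainers.containers.MySQLContainer;

    // Spins up a MySQL container when no datasource is configured and feeds
    // its connection details back into the Spring environment.
    public class DevDatabaseInitializer
            implements ApplicationContextInitializer<ConfigurableApplicationContext> {

        @Override
        public void initialize(ConfigurableApplicationContext context) {
            ConfigurableEnvironment env = context.getEnvironment();

            // If the database is already configured, don't spin up a new one.
            if (env.containsProperty("spring.datasource.url")) {
                return;
            }

            // Create the container using the Testcontainers API.
            MySQLContainer<?> mysql = new MySQLContainer<>("mysql:8.0");
            mysql.start();

            // Provide the configuration back to the application.
            env.getPropertySources().addFirst(new MapPropertySource("dev-database", Map.of(
                    "spring.datasource.url", mysql.getJdbcUrl(),
                    "spring.datasource.username", mysql.getUsername(),
                    "spring.datasource.password", mysql.getPassword())));
        }
    }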
You can take a look at the video with Josh Long where this concept was tried: https://www.youtube.com/watch?v=1PUshxvTbAc&t=2450s
It would also work in production environments, but the usefulness of ephemeral databases there might be limited.

How to create a pipeline in Jenkins for Spring Boot microservices

I have a Spring Boot project with 4 microservices (a Eureka service registry, a Config Server, a Zuul gateway, and a userservice) in one repository, with a parent project where I have a docker-compose.yml that reads the Dockerfiles of the microservices and uses the "application-docker.yml" and "bootstrap-docker.yml" files.
What I'd like to do is trigger a Jenkins pipeline after a commit in Git so that it compiles and deploys the microservices in Docker. Eventually I'd like to have a production configuration that deploys the images to Kubernetes, maybe on AWS.
Now, in order to work, the microservices need to start in order:
configserver
eureka service registry
gateway, etc.
What is the best practice?
If I have separate repositories per microservice, I think I can figure it out. It should be easy to deploy a single microservice, assuming that the config server and Eureka service registry are already up and running; in reality they should never change.
If I have a single repository and I keep developing new microservices, do I need a separate Jenkinsfile per microservice, or can I have one Jenkinsfile in the parent project and use docker-compose?
How does it work? Are there any articles online that can help (I couldn't find any)? Does it make sense?
Or do I need to look at Jenkins X ?
Thanks!
I would recommend using separate repositories for each microservice. You use microservices to avoid a monolith and to have small, well-defined services; it only seems appropriate to also separate them physically, i.e. store them in separate repositories (which, for example, makes it easier to reuse one).
You would then have to provide a Jenkinsfile in each repo. These would be mostly identical; a minimal sketch is shown below.
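A minimal declarative Jenkinsfile along those lines might look like this (stage names, registry and image names are placeholders to adapt):

    pipeline {
        agent any
        stages {
            stage('Build & Test') {
                steps {
                    sh 'mvn -B clean verify'
                }
            }
            stage('Docker Image') {
                steps {
                    // BUILD_NUMBER is provided by Jenkins; adjust the registry/naming scheme.
                    sh 'docker build -t my-registry/example-service:${BUILD_NUMBER} .'
                    sh 'docker push my-registry/example-service:${BUILD_NUMBER}'
                }
            }
            stage('Deploy') {
                steps {
                    sh 'docker-compose -p example-project -f docker-compose.yml up -d example-service'
                }
            }
        }
    }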
If you want fast release cycles you could automatically deploy a single service upon release.
Alternatively you could use an additional release train module that handles the full deployment.
In both cases I would use a docker-compose file that handles the interconnection between the services.
You can enforce the right startup order using depends_on, links, volumes_from, and network_mode: "service:..." (see the sketch below). For a full reference see the Docker documentation.
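Note that depends_on on its own only controls the start order, not readiness; depending on your Compose file version you can combine it with healthchecks (condition: service_healthy). A minimal sketch for the startup order from the question (image names, ports and the healthcheck command are assumptions to adapt):

    version: "2.1"
    services:
      configserver:
        image: example/configserver:latest
        healthcheck:
          # Assumes curl is available in the image and the config server listens on 8888.
          test: ["CMD", "curl", "-f", "http://localhost:8888/actuator/health"]
          interval: 10s
          retries: 10
      eureka:
        image: example/eureka:latest
        depends_on:
          configserver:
            condition: service_healthy
      gateway:
        image: example/gateway:latest
        depends_on:
          - eureka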
If you want to keep your single repository, your Jenkinsfile(s) would have to be quite hacky, I suppose... After each commit you would either:
build all modules --> monolithic behaviour
somehow determine which modules have changed (e.g. by looking at the git log) --> same behaviour as with multiple repositories, but rather hacky
The Docker-Compose File
If you want to release all modules at a specific point in time, you could use a Release Train module where the docker-compose.yml resides next to a Jenkinsfile. Then, when you want to ship your application, you can start this Jenkins job.
If you want to ship each service as soon as it is released, independently from the others, you would need to access the docker-compose.yml from each module. You could do this manually (since the files won't change too often) or create a docker module that you use as a git-submodule in all your services.
We use a generic docker-compose.yml for this, where every version is replaced by a variable:
    example-service:
      image: example.service:${EXAMPLE_SERVICE_VERSION}

Then, to start that specific service in Jenkins, we use the following commands:

    export EXAMPLE_SERVICE_VERSION=1.1.1
    docker-compose -p example-project -f docker-compose.yml up -d example-service

Spring Cloud Multiproject or Single Projects

Does anyone have experience with running Spring Cloud microservices in production?
I have about 10 microservices within one big Maven multi-project. This means that all the source code is under a single repo. The advantages are that it is easy to check out and to manage the project in my IDE.
My concern is with CI, as it means that the build server will need to figure out which services changed and deploy them accordingly. Alternatively, all the microservices will be deployed every time a change is committed.
I'm considering splitting the project into one source repo per microservice (my gut feeling is that this is the correct approach). This way only the affected service is deployed.
My goal is to run every microservice as a container on Kubernetes.
Do you have any advice / tips / concerns about things that might be an unwelcome surprise down the line?
More important than separating the codebases is creating a separate CI/CD pipeline per microservice (running its own unit and integration tests), plus another pipeline that integrates all microservices and runs end-to-end tests.
Since you plan to release your microservices as Docker containers, consider a Docker Registry for versioning and distribution of your images.
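For example, tagging and pushing an image per released version (registry host, image name and tag are placeholders):

    docker build -t registry.example.com/team/user-service:1.4.2 .
    docker push registry.example.com/team/user-service:1.4.2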
Here are some references that follow this idea: DZone, Microsoft

Inject Maven settings.xml file into Kubernetes deployment

This question is about running a Kubernetes deployment on Azure and we are completely new to Kubernetes. We have a new microservice called xde-deployer that we use to deploy projects onto other microservices. The xde-deployer builds projects using Maven and therefore needs a working Maven configuration. Normally this is provided in the user's settings.xml. In this case we are running it in a docker container, so xde-deployer will look for it in /root/.m2/settings.xml.
Normally, when we deploy the Docker container, we use a volume to pass in the settings.xml, which is located on the host. On Kubernetes, of course, this is not so straightforward. As the answers to this question state, one could add the file later or use a ConfigMap. Both answers are a bit too vague for our purposes, though. Is there really no way to do this from the deployment? I cannot imagine we are the only ones who need to pass Maven settings to jobs running on Kubernetes. How are others solving this problem? My main problem after reading the documentation is still: how do I get a file into the Kubernetes cluster at all when I am on Azure? Is there a kind of persistent volume or parameter store that can easily be shared by the pods?
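To illustrate the ConfigMap route mentioned above, a minimal sketch (names and the image are examples): create the ConfigMap from the local file and mount just the settings.xml into the pod, e.g.

    # Create the ConfigMap from the local file first:
    #   kubectl create configmap maven-settings --from-file=settings.xml
    # Then mount it in the Deployment's pod template:
    spec:
      containers:
        - name: xde-deployer
          image: example/xde-deployer:latest   # placeholder image
          volumeMounts:
            - name: maven-settings
              mountPath: /root/.m2/settings.xml
              subPath: settings.xml   # avoids shadowing the rest of /root/.m2
      volumes:
        - name: maven-settings
          configMap:
            name: maven-settings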

JBoss: deploying an EAR and a WAR file in the same server instance

Is it possible in JBoss to deploy an EAR and a WAR file in the same server instance? If so, is there a good source to get started? Any suggestions?
I have two applications deployed under my default deploy folder of JBoss. From one application I have to call the other application to get some data. Is this possible? If so, how do I get started?
It is actually recommended that you only put one application in a server instance at a time. That said, I have had multiple EARs and WARs running in the same instance while developing. But for production it is better to separate them out. Since there is no extra cost involved, this makes it easier to observe and debug the apps.
Do you have a specific reason for wanting them to run side by side?
What version of JBoss are you running?

Resources