I have a Spring Boot project with 4 microservices (a Eureka service registry, a Config Server, a Zuul gateway and a user service) in one repository, with a parent project where I have a docker-compose.yml that reads the Dockerfiles of the microservice modules and uses the "application-docker.yml" and "bootstrap-docker.yml" files.
What I'd like to do is trigger a Jenkins pipeline after a commit in Git so that it compiles and deploys the microservices in Docker. Eventually I'd like to have a production configuration that deploys the images to Kubernetes, maybe on AWS.
Now, in order to work, the microservices need to start in order:
configserver
eureka service registry
gateway, etc.
What is the best practice?
If I have separate repositories per microservice, I think I can figure it out. It should be easy to deploy a single microservice, assuming that the config server and the Eureka service registry are already up and running; in reality they should never change.
If I have a single repository and I keep developing new microservices, do I need a separate Jenkinsfile per microservice, or can I have one Jenkinsfile in the parent project and use docker-compose?
How does that work? Are there any articles online that can help (I couldn't find any)? Does this make sense?
Or do I need to look at Jenkins X?
Thanks!
I would recommend using separate repositories for each microservice. You use microservices to avoid monoliths and to have small, well-defined services; it only seems appropriate to also separate them in space, i.e. store them in separate repositories (which, for example, makes it easier to reuse one).
You would then have to provide a Jenkinsfile in each repo. These would be mostly identical.
If you want fast release cycles you could automatically deploy a single service upon release.
Alternatively you could use an additional release train module that handles the full deployment.
In both cases I would use a docker-compose file that handles the interconnection between the services.
You can enforce the right order by using depends_on, links, volumes_from, and network_mode: "service:...". For a full reference, see the Docker documentation.
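A minimal sketch of such a compose file, assuming the module names configserver, eureka, gateway and userservice (adjust them to your project):

version: "3"
services:
  configserver:
    build: ./configserver        # Dockerfile of the config server module
  eureka:
    build: ./eureka
    depends_on:
      - configserver             # start only after the config server container is up
  gateway:
    build: ./gateway
    depends_on:
      - configserver
      - eureka
  userservice:
    build: ./userservice
    depends_on:
      - configserver
      - eureka
      - gateway

Note that depends_on only controls the order in which containers are started, not application readiness; a Spring Boot service may still need to retry its connection to the config server (for example spring.cloud.config.fail-fast=true together with spring-retry on the classpath) until the server actually responds.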
If you want to keep your single repository, your Jenkinsfile(s) would have to be quite hacky, I suppose... After each commit you would either:
build all modules --> monolithic behaviour
somehow determine which modules have changed (e.g. by looking at the git log) --> the same behaviour as with multiple repositories, but achieved in a very hacky way
The Docker-Compose File
If you want to release all modules at a specific point in time, you could use a release train module where the docker-compose.yml resides next to a Jenkinsfile. Then, when you want to ship your application, you can start this Jenkins job.
If you want to ship each service as soon as it is released, independently of the others, you would need to access the docker-compose.yml from each module. You could do this manually (since the file won't change too often) or create a docker module that you use as a Git submodule in all your services.
We use a generic docker-compose.yml for this, where every version is replaced by a variable:
example-service:
  image: example.service:${EXAMPLE_SERVICE_VERSION}
Then, to start that specific service in Jenkins, we use the commands:
export EXAMPLE_SERVICE_VERSION=1.1.1
docker-compose -p example-project -f docker-compose.yml up -d example-service
I'd like to package the Spring Cloud Data Flow server into a container which will contain one local jar application, and publish this into a local repo. The expectation is that the end result is the same as the normal Data Flow server:
https://hub.docker.com/r/springcloud/spring-cloud-dataflow-server
just with the local jar added.
Creating the Dockerfile to include the jar is straightforward, but I'm struggling a bit with how to register the jar with the Data Flow server.
I know one option is to use the REST API, but it feels quite complicated to start the Data Flow server during the Docker image build. I found documentation suggesting that application.yml might be a way to do this as well, but I couldn't figure out how exactly.
https://github.com/spring-cloud/spring-cloud-dataflow/blob/main/spring-cloud-dataflow-server/README.adoc
https://docs.spring.io/spring-boot/docs/1.5.13.RELEASE/reference/html/boot-features-external-config.html
So is there a straightforward way to package a jar into the Data Flow server Docker container?
The API is the only practical way to do it. Take a look at how we register apps with the docker-compose installation.
Technically, you could also pre-populate the associated DB table(s), but I don’t recommend this.
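To illustrate the API approach, here is a rough sketch of an extra compose service in the style of the app-import service from the docker-compose installation. It assumes the custom jar was baked into the server image at /opt/apps/my-app.jar and that the server is reachable as dataflow-server:9393; the image, names and paths are hypothetical and may differ between Data Flow versions:

app-import-local:
  # registers the custom jar (baked into the dataflow-server image) via the REST API
  image: alpine:3            # any small image with a shell will do; curl is installed on the fly
  depends_on:
    - dataflow-server
  command: >
    /bin/sh -c "apk add --no-cache curl &&
    until curl -fs http://dataflow-server:9393/about > /dev/null; do sleep 1; done &&
    curl -fs -X POST http://dataflow-server:9393/apps/task/my-local-app -d uri=file:///opt/apps/my-app.jar"

POST /apps/{type}/{name} with a uri parameter is the registration endpoint; for a stream app you would use source, processor or sink instead of task.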
I am using Spring Cloud Config Server first time and have a basic query.
Spring Cloud Config Server externalises the configuration to a separate Git repository.
Why would I create a separate repository just for the configurations?
Is it not advisable to have a mono-repository with all application code and configuration in a single repo, rather than creating a separate one just for configuration?
We have multiple microservices, all present in the same repository. Shouldn't the config server be one of the microservices present in the same repository as the rest of the application code?
So, in my multi-module Gradle project, I could make config-server one of the modules and use that same repository as the Git-backed URL in config-server. Is this advisable? If yes, where should I keep the configuration files in config-server? Inside resources?
Thank you.
When working with microservices it is advisable to have one repository for each microservice. The config server is a microservice as well, therefore it should be put in a separate repository.
Each microservice should have its own independent code repository and your application configuration should never be in the same repository as your source code.
You can read more about this here: Heroku's The Twelve-Factor App. There you can find 12 best practices to use when building microservices, but for this question I recommend looking at:
1st factor: The codebase
3rd factor: The config
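For completeness, the config server itself then only needs to point at that separate configuration repository, e.g. in its own application.yml (the repository URL and branch are placeholders):

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/your-org/config-repo   # repo that holds only the *.yml / *.properties files
          default-label: main
server:
  port: 8888

The configuration files for the individual services (e.g. userservice.yml, userservice-prod.yml) live in that separate repo, not under src/main/resources of the config server.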
Does anyone have experience with running Spring Cloud Micro Services in production?
I have about 10 microservices within one big Maven multi-module project. This means that all the source code is under a single repo. The advantage is that it is easy to check out and manage the project in my IDE.
My concern is with CI, as it means that the build server will need to figure out which services changed and deploy them accordingly. Alternatively, all the microservices will be deployed every time a change is committed.
I'm considering splitting the project into one source repo per microservice (my gut feeling is that this is the correct approach). This way only the affected service is deployed.
My goal is to run every micro service as a container on Kubernetes.
Do you have any advice / tips / concerns about anything that might be an unwelcome surprise down the line?
More important than separating the codebase is to create a separate CI/CD pipeline per microservice (running its own unit and integration tests), and another pipeline that integrates all microservices and runs end-to-end tests.
Since you plan to release your microservices as Docker containers, consider a Docker Registry for versioning and distribution of your images.
Here are some references that follow this idea: DZone, Microsoft
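As a small illustration of the registry idea: each pipeline pushes its own versioned image, and the compose file (or Kubernetes manifest) used for integration and end-to-end tests only references those tags. Registry host, names and versions below are made up:

services:
  userservice:
    image: registry.example.com/myapp/userservice:1.4.2    # pushed by the userservice pipeline
  orderservice:
    image: registry.example.com/myapp/orderservice:2.0.1   # pushed independently by its own pipeline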
This question is about running a Kubernetes deployment on Azure and we are completely new to Kubernetes. We have a new microservice called xde-deployer that we use to deploy projects onto other microservices. The xde-deployer builds projects using Maven and therefore needs a working Maven configuration. Normally this is provided in the user's settings.xml. In this case we are running it in a docker container, so xde-deployer will look for it in /root/.m2/settings.xml.
Normally when we deploy the Docker container, we use a volume to pass in the settings.xml, which is located on the host. On Kubernetes, of course, this is not so straightforward. As the answers to this question state, one could add the file later or use a ConfigMap. Both answers are a bit too vague for our purposes, though. Is there really no way to do this from the deployment? I cannot imagine we are the only ones who need to pass Maven settings to jobs running on Kubernetes. How are others solving this problem? My main problem after reading the documentation is still: how do I get a file into the Kubernetes cluster at all when I am on Azure? Is there a kind of persistent volume or parameter store that can easily be shared by the pods?
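For what it's worth, here is a minimal sketch of the ConfigMap approach mentioned above, assuming the deployment is called xde-deployer and the ConfigMap is created from the local file with kubectl create configmap maven-settings --from-file=settings.xml (all names are illustrative). On AKS this works exactly like on any other cluster, because the ConfigMap is stored in the cluster itself:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: xde-deployer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xde-deployer
  template:
    metadata:
      labels:
        app: xde-deployer
    spec:
      containers:
        - name: xde-deployer
          image: your-registry/xde-deployer:latest        # placeholder image name
          volumeMounts:
            - name: maven-settings
              mountPath: /root/.m2/settings.xml           # mount only the one file...
              subPath: settings.xml                       # ...so the rest of /root/.m2 stays writable
      volumes:
        - name: maven-settings
          configMap:
            name: maven-settings                          # created with kubectl create configmap

If the settings contain credentials, a Secret mounted the same way would be the more appropriate resource.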
I have built a Spring Boot application and containerized it. I have two ways to inject configuration into the service.
As part of the code (hard-coded), in the application properties file with multiple profiles; my Dockerfile only accepts one variable, -Dspring.profiles.active=${environment}, passed as part of the CMD that starts the app container.
e.g. application.yml (one file, multiple profiles):
spring:
  profiles: dev
# dev-specific properties here
---
spring:
  profiles: prod
# prod-specific properties here
Load the properties file onto the host machine running the app and inject it into the container at start-up.
e.g.: docker run -d --env-file=environment(dev).properties myapp:latest
I would like to know what the industry-standard way is to inject properties into a microservice app, with the advantages and disadvantages of each approach.
Do you keep the configuration close to the app?
Or do you prefer to inject it as a dependency when the app starts?
My understanding: I prefer keeping the configuration closer to the container, as I have minimal dependencies; however, a small change will warrant a new build and deploy.
The second option has the advantage that the app code (image) does not require a change, and you can inject the updated configuration with a container restart.
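A small docker-compose sketch of that second option, assuming a dev.properties file next to the compose file (file and service names are made up):

myapp:
  image: myapp:latest
  env_file:
    - ./dev.properties              # key=value pairs become environment variables in the container
  environment:
    - SPRING_PROFILES_ACTIVE=dev    # Spring Boot reads profiles and property overrides from environment variables

Changing dev.properties then only requires recreating the container (docker-compose up -d myapp), not rebuilding the image.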
In my company we went with the first solution; however, I am not sure whether it is an industry standard or not. The main reason is that it is very unlikely for us to change the configuration after building the Docker image.
Also, if you build different images for different environments, passing the -Dspring.profiles.active=${environment} parameter to the container run command is not very smart (it is always prod for the production container). Instead, in the Dockerfile, you can just copy in the appropriate environment.properties.