Spring Cloud Contract: common repository for all microservices

I am working on a microservice architecture system. I would like to set up a common repository with all the contracts of our microservices, using a consumer-driven approach. I am taking this sample as an example:
https://github.com/spring-cloud/spring-cloud-contract/tree/2.1.x/samples/standalone/contracts
No problems setting up a test on the consumer side.
But while setting up the producer to get the contracts, I realized that contracts-0.0.1-SNAPSHOT.jar contains all the contracts of all producers in the contracts project. This sample project is simple and only contains one producer: com.example:server.
What happens when more producers' contracts are added, like com.example:my-super-service? From what I understood, contracts-0.0.1-SNAPSHOT.jar will contain all the contracts for both the server and my-super-service applications.
Does it make sense for every producer to get the entire universe of contracts by means of this jar? How can we restrict the installed contracts to the service in question, so that, for instance, com.example:my-super-service only sees its respective contracts?
Thank you so much for your help

Actually, the producer will only get the contracts that match its own groupId:artifactId. Thank you anyway :)
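For reference, this works because of a layout convention (a sketch based on the linked sample; the my-super-service entries are hypothetical). In the common contracts jar, contracts are grouped by the producer's coordinates, and each producer's Spring Cloud Contract plugin only picks up the subtree matching its own groupId/artifactId:

    contracts/
      com/
        example/
          server/               <- only com.example:server picks these up
            rest/
              shouldReturnOk.groovy
          my-super-service/     <- hypothetical second producer
            rest/
              shouldReturnBar.groovy

On the producer side, the Maven plugin is pointed at the shared contracts artifact (coordinates assumed from the question), and the groupId/artifactId filtering happens automatically:

    <plugin>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-contract-maven-plugin</artifactId>
        <extensions>true</extensions>
        <configuration>
            <!-- the shared contracts jar, not this producer's own coordinates -->
            <contractDependency>
                <groupId>com.example</groupId>
                <artifactId>contracts</artifactId>
            </contractDependency>
            <contractsMode>REMOTE</contractsMode>
        </configuration>
    </plugin>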

Related

Where to define RabbitMQ Queue, Exchange Properties in MicroService Architecture?

In my current project we are using Spring Boot and RabbitMQ for some of the internal microservice communication.
We are currently defining the queue properties in both services that publish/listen to this queue. Additionally, we define the exchange only in the publisher service.
However, to make it more maintainable, I would like to find a setup/best practice to define the queue once so that all relevant services can rely on it.
So far, I have checked the AsyncAPI project and considered creating an extra library to outsource the configs there.
What is the best practice here or how do you do it in your projects?
In my opinion, there isn't a general good practice. You can try to define YOUR practice with your team; try a strategy and fail fast.
If your system is "static" (I hope not), you can define the configs with a definitions file. I don't like this, but it is a possible approach. You could try that solution for a DEV environment with Docker.
In our team, each workload defines its own configs, and the RabbitMQ team creates the vhost and its configs with a DevOps approach.
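As an illustration of the shared-library idea mentioned in the question, here is a minimal sketch using Spring AMQP (all names are hypothetical): the queue, exchange, and binding are declared once in a library class, and both the publisher and the listener services import it, so the topology is defined in exactly one place.

    // Shared messaging library, imported by both publisher and listener services.
    package com.example.messaging; // hypothetical package

    import org.springframework.amqp.core.Binding;
    import org.springframework.amqp.core.BindingBuilder;
    import org.springframework.amqp.core.Queue;
    import org.springframework.amqp.core.TopicExchange;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class OrderEventsAmqpConfig {

        public static final String EXCHANGE = "order-events";      // hypothetical names
        public static final String QUEUE = "order-events.billing";
        public static final String ROUTING_KEY = "order.created";

        @Bean
        TopicExchange orderEventsExchange() {
            return new TopicExchange(EXCHANGE, true, false); // durable, no auto-delete
        }

        @Bean
        Queue orderEventsQueue() {
            return new Queue(QUEUE, true); // durable
        }

        @Bean
        Binding orderEventsBinding() {
            // bind the queue to the exchange so both sides agree on the topology
            return BindingBuilder.bind(orderEventsQueue())
                    .to(orderEventsExchange())
                    .with(ROUTING_KEY);
        }
    }

Since Spring AMQP declares these beans idempotently on startup, it no longer matters which service boots first.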

Using multiple file suppliers in Spring Cloud Stream

I have an application that includes the file-supplier Spring Cloud Stream module. My workflow is: track file creation/modification in a particular directory, then push events to a Kafka topic when such events occur. FileSupplierConfiguration is used to configure this file supplier. But now I have to track one more directory and push events to another, separate Kafka topic. This is a problem, because there is no way to include multiple FileSupplierConfiguration instances in the project to configure another file supplier. I remember that one of the main principles of microservices, which spring-cloud-stream was designed for, is that you do one thing and do it well without affecting others; but this is still the same microservice with the same tracking/pushing functionality, just for another directory and topic. Is there any way to add one more file supplier, with its own configuration, using the file-supplier module? Or is the best solution to run one more application instance with a different configuration?
Yes, the best way is to have another instance of this application, but with its own specific configuration properties. This is really how these functions have been designed: microservices based on convention over configuration. What you are asking really contradicts Spring Boot expectations. Imagine you needed to connect to several databases: only a single JdbcTemplate can be auto-configured, and the rest is only possible manually. It is better to have the same code base which relies on the auto-configuration and can be given different props.
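To make that concrete, a minimal sketch of the two-instance approach; the property names assume the prepackaged file supplier (the file.supplier prefix and the fileSupplier-out-0 binding), and the directories and topics are made up:

    # instance 1 (application.properties)
    file.supplier.directory=/var/data/inbox-a
    spring.cloud.stream.bindings.fileSupplier-out-0.destination=topic-a

    # instance 2 (same jar, different properties)
    file.supplier.directory=/var/data/inbox-b
    spring.cloud.stream.bindings.fileSupplier-out-0.destination=topic-b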

Different polling delay for different suppliers in Spring Cloud Stream Function

I'm trying to implement suppliers using Spring Cloud Function and Kafka. One supplier should publish every 10 seconds and the other every 30 seconds. I could see from the reference documentation that I can change the delay using the spring.cloud.stream.poller.fixed-delay property.
But I need to set a different delay for each topic. Is there any way to do it?
From the spring-cloud-function perspective there isn't any kind of polling, as that is not the responsibility of the framework.
From the spring-cloud-stream perspective, which uses spring-cloud-function, there is indeed the mechanism you have described. However, keep in mind that spring-cloud-stream is primarily designed to support the concept of microservices (it is not a general messaging framework), and in microservices we embrace the do one thing and do it well without affecting others approach. So having more than one supplier kind of goes against this model.
If you are building a general-purpose messaging app, then I'd suggest using the Spring Integration framework, which provides all the necessary hooks to accomplish what you need, but will require a bit more configuration detail.
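To make the Spring Integration suggestion concrete, here is a hedged sketch (the channel names and payloads are made up): each flow gets its own poller, so the two sources can publish at different rates, and the output channels can then be bridged to Kafka bindings.

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.dsl.IntegrationFlow;
    import org.springframework.integration.dsl.IntegrationFlows;
    import org.springframework.integration.dsl.Pollers;

    @Configuration
    public class TwoRateSuppliers {

        @Bean
        public IntegrationFlow every10Seconds() {
            // this source polls every 10 seconds, independently of the other flow
            return IntegrationFlows
                    .fromSupplier(() -> "payload-a",
                            c -> c.poller(Pollers.fixedDelay(10_000)))
                    .channel("toTopicA") // hypothetical channel bound to the first topic
                    .get();
        }

        @Bean
        public IntegrationFlow every30Seconds() {
            return IntegrationFlows
                    .fromSupplier(() -> "payload-b",
                            c -> c.poller(Pollers.fixedDelay(30_000)))
                    .channel("toTopicB") // hypothetical channel bound to the second topic
                    .get();
        }
    }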

Separate microservice just for microservices orchestration?

I have a few microservices where each microservice has REST endpoints for CRUD operations.
I have to create a workflow that will start from one microservice with some initial input, but later the output from one microservice can be used as input to other microservices. There can be some synchronous and asynchronous calls to these REST APIs.
I have looked at some of the workflow engines, but I do not think that I can create my workflow without writing any Java code.
Should I write a separate microservice just for microservices orchestration? This orchestration microservice will know the exact workflow and can be configured with the inputs required to start the workflow, and it can also use a third-party workflow engine like Camunda to store the definition of the workflow.
Is this correct thinking, to have a separate microservice just for microservices orchestration? Till now the existing microservices have no idea about the other microservices. There is a chance that the output from one microservice needs to be massaged before being used as input for another microservice.
"I have looked at some of the workflow engines, but I do not think that I can create my workflow without writing any Java code."
This depends on your business processes and the complexity of your workflow. Usually, yes, you will need to write some code to achieve it.
"Should I write a separate microservice just for microservices orchestration? This orchestration microservice will know the exact workflow and can be configured with the inputs required to start the workflow, and it can also use a third-party workflow engine like Camunda to store the definition of the workflow."
Yes, you can do that. I did something similar on a system using microservices. This is a very good idea in the long run, as you can configure your workflow per environment as well. For example, on your development machine you would have a slightly different workflow/configuration, which is practical for developers or QAs testing their solutions. On the other hand, on staging/production you can predefine customer setups/orchestration which you can reuse any time you get new customers or users.
"Is this correct thinking, to have a separate microservice just for microservices orchestration? Till now the existing microservices have no idea about the other microservices. There is a chance that the output from one microservice needs to be massaged before being used as input for another microservice."
Yes, you can do that without problems, although I would be careful with the name orchestration, as it has another meaning in the context of microservice architecture (Docker, Docker Swarm, Kubernetes).
A similar example would be some kind of end-to-end or cross-microservice testing microservice that exercises business operations spanning several microservices and asserts the results. Usually business operations involve more than one microservice, so to test them this microservice would call APIs from multiple microservices and verify the results and scenarios based on your business rules.
Another example would be something like a seeder microservice (which seems very similar to what you are trying to do here). A seeder microservice is responsible for seeding (creating) test data in your microservices. This test data is the basic setup/configuration data you need for your microservice business processes to work. It is very handy for development machines or test environments where you need to set up an environment quickly: using the seeder microservice you can easily set up the environment, do your work or tests, and dispose of the environment (data) as needed.
Both of these examples are microservices which serve your needs and make it easier to work with your system.
One final note regarding this:
"Till now the existing microservices have no idea about the other microservices."
They should be abstracted from each other in the sense that they are not aware of each other's internal implementation or data (separate databases), but they should communicate with each other in order to perform business operations, which are sometimes cross-microservice, like the typical example of a payment microservice and an order microservice in an online shop. So it is fine that they know about each other and communicate, but this communication has to be designed very carefully in order to avoid some common pitfalls.
They usually communicate with each other through direct calls over HTTP or some other protocol, or through a message queue like Apache Kafka or RabbitMQ. You can read more about it in this answer.
Yes, you should cover the orchestration part in a separate service. And yes, go with a BPMN 2 process engine as the orchestrator, as you already suspected. This may include writing a little code, mostly for data mapping or connectors.
Benefits include, for instance, out-of-the-box support for:
state management and long running processes / persistence for data
versioning (!)
retries and error handling
tooling to modify state and data in case something went wrong
timeouts, parallel execution (if necessary)
scalability
graphical process model
audit trail
end-to-end visibility in monitoring tools based on the BPMN 2 model
ability to include business rules tasks (DMN) for more complex rules
combination of push and pull communication patterns and sync/async communication
business-IT alignment via BPMN 2
support for the various BPMN 2 events
standardization (skills, security, software quality, features)
...
This is a great related article about the WHY, using airline ticket booking as an example:
https://blog.bernd-ruecker.com/3-common-pitfalls-in-microservice-integration-and-how-to-avoid-them-3f27a442cd07
This is about the important design considerations in case you go with a process engine:
https://blog.bernd-ruecker.com/the-microservice-workflow-automation-cheat-sheet-fc0a80dc25aa
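If you do go with Camunda, the orchestration service mostly needs a little glue code to start and drive processes. A hedged sketch (the process key order-fulfilment and the variable name are made up; the deployed BPMN model defines the actual sequence of service calls):

    import org.camunda.bpm.engine.RuntimeService;
    import org.camunda.bpm.engine.runtime.ProcessInstance;
    import org.camunda.bpm.engine.variable.Variables;
    import org.springframework.stereotype.Service;

    @Service
    public class OrderWorkflowStarter {

        // provided by the Camunda Spring Boot starter
        private final RuntimeService runtimeService;

        public OrderWorkflowStarter(RuntimeService runtimeService) {
            this.runtimeService = runtimeService;
        }

        public String start(String orderId) {
            // the engine persists state, retries, and timers between the REST calls
            ProcessInstance instance = runtimeService.startProcessInstanceByKey(
                    "order-fulfilment", // hypothetical process key from the BPMN model
                    Variables.createVariables().putValue("orderId", orderId));
            return instance.getId();
        }
    }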
I faced a similar problem as in the original question. I had a few microservices with simple dependencies that needed to be managed, and I went down the path of writing my own microservice, https://github.com/pedro-r-marques/workflow, to manage the dependencies and do the orchestration. It uses a YAML definition file to describe the dependencies and RabbitMQ for message passing. One could also replace RabbitMQ with REST API calls fronted by a load balancer.

Recommendations for microservice code / modules

I am new to microservices and am learning Spring Boot.
I have IntelliJ Ultimate and am wondering how best to structure my microservice code.
For the system that I am building, which will have a few microservices, should I:
1. open one IntelliJ project containing a module for each Spring Boot microservice, or
2. open multiple instances of IntelliJ and have one Spring Boot microservice per instance?
I think option 2 will be tricky if I have a lot of microservices, but I do not know whether IntelliJ can have multiple Spring Boot microservices running at the same time in one instance.
Any advice on how you work with microservice code / projects would be appreciated.
IntelliJ is able to have a lot of Spring Boot microservices/applications running at the same time. It has no impact on performance, even if you run all microservices in debug mode. I would choose the first option. This is the approach we are using in our project:
Start a new project in IntelliJ.
For each microservice, create a new module in IntelliJ.
Benefits:
If someone new wants to work on the project, they can import one project and have all microservices imported at once. Even if they will only work in a few of them, they can see how the others are built.
In IntelliJ you can create a Run Configuration of type "Compound", so when you want to run all your services at once, you can do it with just one click.
But if you have 1500 employees and want to go the full Netflix way and create 500 microservices, then the better way will be to keep them separate :)
One of the benefits of microservices is organisational scalability: you can have several teams working on different microservices at the same time. Also, it should be possible to develop, build, and run each microservice on its own. If you don't have those goals, you get all the complexities of microservices and very few of the advantages.
In such a scenario, each microservice should live in its own version control repository. As long as this is given, if the IDE supports multiple version-control sources per module, go for option 2; otherwise, I guess you must stick with option 1.
