spring-cloud-stream-binder-kinesis AWS

How can we have two AWS Kinesis connections using spring-cloud-stream-binder-kinesis?
1st connection: the Spring application and an AWS Kinesis stream in the same AWS account.
2nd connection: another AWS Kinesis stream sitting in a different AWS account.
Is it possible to have two different connections from one Spring application to two different Kinesis streams in different AWS accounts?
If yes, how do we implement this?

See Connecting to Multiple Systems.
By default, binders share the application’s Spring Boot auto-configuration, so that one instance of each binder found on the classpath is created. If your application should connect to more than one broker of the same type, you can specify multiple binder configurations, each with different environment settings.
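As a minimal sketch, assuming a binder version driven by Spring Cloud AWS-style cloud.aws.* properties (the binder names, stream names, and credential placeholders below are hypothetical; exact keys depend on your binder and Spring Cloud AWS versions), the two accounts could be declared as two Kinesis binder environments:

spring.cloud.stream.binders.kinesisAccountA.type=kinesis
spring.cloud.stream.binders.kinesisAccountA.environment.cloud.aws.credentials.accessKey=${ACCOUNT_A_ACCESS_KEY}
spring.cloud.stream.binders.kinesisAccountA.environment.cloud.aws.credentials.secretKey=${ACCOUNT_A_SECRET_KEY}

spring.cloud.stream.binders.kinesisAccountB.type=kinesis
spring.cloud.stream.binders.kinesisAccountB.environment.cloud.aws.credentials.accessKey=${ACCOUNT_B_ACCESS_KEY}
spring.cloud.stream.binders.kinesisAccountB.environment.cloud.aws.credentials.secretKey=${ACCOUNT_B_SECRET_KEY}

# Each binding then names the binder whose account owns the stream
spring.cloud.stream.bindings.input.destination=stream-in-account-a
spring.cloud.stream.bindings.input.binder=kinesisAccountA
spring.cloud.stream.bindings.output.destination=stream-in-account-b
spring.cloud.stream.bindings.output.binder=kinesisAccountB

With this layout each binder gets its own isolated auto-configuration environment, so the two sets of credentials never collide.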

Related

Cross Region Events Routing and Spring Cloud Stream & Spring Cloud Data Flow

I am using AWS as a cloud provider. I have a microservice in the Frankfurt region that publishes events to a Kinesis Data Stream in the same region using the Spring Cloud Stream (SCDF) Kinesis adapter. I have multiple microservices in different regions (Oregon, Ohio, Singapore, Mumbai, etc.) which consume events from the respective Kinesis streams in their regions, also using the Spring Cloud Stream (SCDF) Kinesis adapter. Now I have to route the events in the Frankfurt Kinesis stream to the data streams in the different regions (only to the respective regional streams).
1. Can I do this using any Spring-provided functionality? Can I use Spring Cloud Stream or SCDF to do cross-region routing? If yes, please point to some examples.
2. If #1 is not possible, what are the best ways to do this?
3. I read about AWS EventBridge; is it a correct choice for the above use case?
The Spring Cloud Stream Binder for AWS Kinesis is fully based on the standard AWS client or the KCL. Both of them require a particular region to be configured statically or resolved from the EC2 environment. So, to be able to consume from one region and relay stream records to another, you have to code a small "replicator" stream application.
Luckily, a Spring Cloud Stream application can be configured with several binders. Right, in our case both of them are going to be the same Kinesis binder type, but we are going to configure them with different credentials and different regions.
See Spring Cloud Stream docs for multi-binder configuration: https://docs.spring.io/spring-cloud-stream/docs/3.1.2/reference/html/spring-cloud-stream.html#multiple-binders.
The code of your stream application could probably be just a plain identity function:

import java.util.function.Function;

@Bean
public Function<byte[], byte[]> kinesisStreamRelay() {
    // Pass each record through unchanged; the binders do the region-specific I/O
    return Function.identity();
}
Then you bind its input to a destination on one Kinesis binder and its output to a destination on the other, as in the sketch below.
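As a minimal sketch, assuming Spring Cloud Stream 3.x functional bindings (so the binding names follow the <function>-in-0/<function>-out-0 convention) and a binder version that accepts Spring Cloud AWS-style region properties; the binder, stream, and region values are hypothetical:

spring.cloud.function.definition=kinesisStreamRelay

# Consume from the Frankfurt stream...
spring.cloud.stream.bindings.kinesisStreamRelay-in-0.destination=frankfurt-events
spring.cloud.stream.bindings.kinesisStreamRelay-in-0.binder=kinesisFrankfurt
# ...and relay every record to the Oregon stream
spring.cloud.stream.bindings.kinesisStreamRelay-out-0.destination=oregon-events
spring.cloud.stream.bindings.kinesisStreamRelay-out-0.binder=kinesisOregon

spring.cloud.stream.binders.kinesisFrankfurt.type=kinesis
spring.cloud.stream.binders.kinesisFrankfurt.environment.cloud.aws.region.static=eu-central-1
spring.cloud.stream.binders.kinesisOregon.type=kinesis
spring.cloud.stream.binders.kinesisOregon.environment.cloud.aws.region.static=us-west-2

You would run one such relay application per target region.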
Also see other ways to do that in this article: https://engineering.opsgenie.com/cross-region-replication-of-kinesis-streams-4a62f3bb269d
See Spring Cloud Function support for AWS Lambda: https://docs.spring.io/spring-cloud-function/docs/3.1.1/reference/html/aws.html. Spring Cloud Stream does not provide a binder implementation for AWS Lambda.

Deploy Spring Boot microservices on AWS App Mesh with EC2

I am trying to deploy Spring Boot microservices with Docker using App Mesh and EC2. I have deployed two sample microservices (https://github.com/amitgct/appmesh-hello), namely caller-service and called-service, using Docker on a single EC2 instance and configured App Mesh accordingly by following the guide https://docs.aws.amazon.com/app-mesh/latest/userguide/getting-started-ec2.html. Currently my applications are running on EC2, but they cannot communicate with each other, and I get an "Unknown host" error when calling called-service from caller-service. Can anyone tell me how I can specify a hostname and register a service with that host on EC2 and App Mesh? (Note: I don't want to use Kubernetes, ECS, AWS Cloud Map, or AWS Route 53.) If you can also provide an example, I would be very thankful. Please help.
https://www.appmeshworkshop.com/servicediscovery/
A step-by-step process is shown here, for the HTTP protocol.
If you change the listeners section in the virtual routers to tcp, it should work for TCP messages as well, for systems that talk plain TCP, for example Akka clusters, as sketched below.
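For illustration only, an App Mesh virtual-router spec with a TCP listener might look roughly like this (the port is a placeholder):

{
  "spec": {
    "listeners": [
      {
        "portMapping": { "port": 8080, "protocol": "tcp" }
      }
    ]
  }
}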

Archaius Configuration Server setup

We are exploring Archaius for our microservices. We want to set up a Configuration Server as a microservice and store the configuration files of the other microservices in it.
We have other microservices (Spring Boot based), say
a) Producer
b) Consumer
deployed in different environments/VMs. We also deploy all of these microservices as a cluster (i.e., run multiple instances of Producer and Consumer) to support high availability.
Please let us know how dynamically changed values in the Configuration Server can be made available to the other microservices (multiple Producers and Consumers).
Thanks

Spring Cloud Data Flow Remote RabbitMQ Server Config

I am new to SCDF and am trying to get started with a RabbitMQ transport layer and SCDF version 1.2.2. I have set up RabbitMQ in a separate VM and have the SCDF local server and SCDF shell jar in another VM. Can someone suggest how I can specify the server details of my RabbitMQ instance (which is on a different host in the same network) for SCDF to use as transport?
For reasons outside my control I need to use the MQ setup on a different machine. Please advise.
SCDF doesn't require RabbitMQ itself; I think you are trying to use RabbitMQ as the binder for the Spring Cloud Stream applications that are orchestrated via SCDF.
You would need to configure the properties mentioned here.
You can find more information here on how to specify these properties at SCDF; a minimal sketch follows.
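For example, from the SCDF shell you could pass Spring Boot's standard RabbitMQ connection properties to every app in a stream at deployment time (the stream name, host, and credentials below are placeholders):

stream deploy myStream --properties "app.*.spring.rabbitmq.host=rabbit-host.example.com,app.*.spring.rabbitmq.port=5672,app.*.spring.rabbitmq.username=myuser,app.*.spring.rabbitmq.password=mypass"

The app.*. prefix applies a property to all apps in the stream.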

How to monitor streaming apps inside SCDF?

I am a novice with Spring Cloud Data Flow and Spring Cloud Stream applications.
Currently my project looks like the following:
I route a POST request from an outside client through a Zuul API gateway to a microservice called Composite. Composite creates a stream using a REST POST and deploys it onto the Spring Cloud Data Flow Server. As far as I know, the microservices mongodb and file run as co-existing JVM processes. If my client has to know the status of the stream and the status of the processed data, how should the Composite microservice interact with the Spring Cloud Data Flow Server? Currently, when I make the POST call to deploy the stream, I don't even get the status from the SCDF Server. Does SCDF expose any hooks to look at the individual apps? Also, how can I change the flow at runtime to create a dynamic mesh?
Currently I am using Local Spring Cloud Data Flow Server for development.
Runtime platform is local
The Local runtime is recommended only for development purposes. If you're preparing for production, please make sure to choose a platform variant (e.g., cf, k8s, yarn, ...) that comes with the non-functional requirements to support reliable and durable execution of all the applications running in the streaming pipeline.
As far as I know, the microservices mongodb and file run as co-existing JVM processes.
If your stream definition is file | mongodb, you'd have 2 different JVMs even when using the Local runtime. They're independent Boot applications.
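For instance, such a pipeline is typically created and deployed from the SCDF shell along these lines (the stream name is hypothetical):

stream create fileToMongo --definition "file | mongodb" --deploy

Each app in the definition then starts as its own Spring Boot process.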
How should the Composite microservice interact with the Spring Cloud Data Flow Server?
Not clear what you mean by "composite" here. All the microservice applications in SCDF communicate via messaging middleware such as Kafka or Rabbit. SCDF provides the orchestration capability to run such applications into various runtime platforms.
Currently, when I make the POST call to deploy the stream, I don't even get the status from the SCDF Server
You can use SCDF's REST APIs to query the current status of the apps, and this is platform agnostic. You can view the list of supported APIs by hitting the root URL of the server (there's a gap in the docs; we will fix it). The following APIs could be useful for status checks.
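For example, against a local server (the default SCDF port is 9393; adjust as needed):

# List all stream definitions along with their current status
curl http://localhost:9393/streams/definitions

# Show the runtime state of every deployed app instance
curl http://localhost:9393/runtime/apps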
Does SCDF expose any hooks to look at the individual apps?
Once the apps are deployed in a runtime platform, you can take advantage of Boot's actuator endpoints to explore more details such as trace, metrics, health, and env, among others, at each application level. See Boot's actuator endpoints for more details. For instance, if your mongodb app is running locally on port 23000, then you can check granular metrics for this application at: http://localhost:23000/metrics.
[As an FYI: future SCDF releases will include integration of Spring Boot + Spring Cloud Sleuth metrics and a visual representation of the same.]
Also, how can I change the flow at runtime to create a dynamic mesh?
If you're referring to editing a running streaming pipeline with additions/deletions, we are currently exploring the design approach to support this functionality.
