I have a Spring Boot application configured to use the Service Bus queue binder to connect to a Service Bus queue. I recently upgraded the "azure-spring-cloud-stream-binder-servicebus-queue" dependency from version 2.10.0 to 4.0.0.
Referring to the version 4.0.0 GitHub sample, it requires the application to explicitly perform a checkpoint commit. According to the Microsoft Event Hubs documentation, it is recommended to use Azure Blob Storage to store checkpoints for production workloads, but I can't find any corresponding checkpoint setting under the Azure Service Bus namespace in that documentation.
My questions are:
Is a checkpoint required for the Service Bus queue binder? If yes, what is the best way to set it up for a production workload?
I ran my application with the following checkpoint settings, taken from the Azure Event Hubs namespace configuration, but I don't see a corresponding checkpoint blob being created in the Azure Storage account I configured:
spring.cloud.azure.eventhubs.processor.checkpoint-store.account-key=
spring.cloud.azure.eventhubs.processor.checkpoint-store.container-name=
spring.cloud.azure.eventhubs.processor.checkpoint-store.create-container-if-not-exists=
spring.cloud.azure.eventhubs.processor.checkpoint-store.account-name=
spring.cloud.azure.eventhubs.processor.checkpoint-store.blob-name=
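For reference, my consumer commits the checkpoint manually, roughly following the 4.0.0 sample; the consume-in-0 binding name and the auto-complete property are from my own configuration:

import com.azure.spring.messaging.AzureHeaders;
import com.azure.spring.messaging.checkpoint.Checkpointer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;

import java.util.function.Consumer;

@Configuration
public class QueueConsumer {

    // Requires manual settlement, e.g. in application.properties:
    // spring.cloud.stream.servicebus.bindings.consume-in-0.consumer.auto-complete=false
    @Bean
    public Consumer<Message<String>> consume() {
        return message -> {
            Checkpointer checkpointer =
                    message.getHeaders().get(AzureHeaders.CHECKPOINTER, Checkpointer.class);
            // ... process the payload ...
            checkpointer.success().block(); // completes (settles) the message on the broker
        };
    }
}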
We have seen a very strange issue in our AWS Kubernetes cluster deployment where the in-memory Spring cache appeared to be persistent even after a rollout restart and pod deletion. Is that even possible, for argument's sake? Deleting a pod should delete the container, which should thereby release the underlying memory.
Please share your thoughts, as there are no logs to share, only the observed behavior.
Environment Details:
Spring Boot 2.7.x
AWS EKS 1.21
Java 17
Spring Cache is an abstraction that integrates with various persistence technologies in addition to offering a simple in-memory cache.
If you have Redis configured, for example, and are using Spring Boot, Redis-backed caching will be used by default, so cached entries live outside the pod and survive restarts.
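As a minimal sketch, assuming spring-boot-starter-data-redis is on the classpath and @EnableCaching is declared, a method like this caches into Redis rather than the JVM heap (names are illustrative):

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class UserLookupService {

    public record User(String id) {}

    // With Redis auto-configured, entries in the "users" cache are stored in Redis,
    // so they survive pod restarts; with the default in-memory ConcurrentMapCache
    // they would die with the JVM.
    @Cacheable("users")
    public User findUser(String id) {
        // expensive lookup, skipped on a cache hit
        return new User(id);
    }
}

Checking which CacheManager is actually in use (for example, by logging its class at startup) is a quick way to confirm whether the cache is truly in-memory.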
We are trying to spin up a stateful MQ manager with the default storage class mounted as persistent storage for data in an Azure Kubernetes Service cluster. Here is the link we followed. We exposed the service type as LoadBalancer, as shown in the command below.
helm install stable/ibm-mqadvanced-server-dev --version 3.0.1 --set service.type=LoadBalancer,security.initVolumeAsRoot=true,license=accept
Now we are able to deploy the MQ pod in AKS, and the pod status shows as Running, but we are unable to reach the MQ web interface. We set the service type to LoadBalancer and added security groups for the NodePorts created by the LoadBalancer, but we still cannot access the service. We checked the pod logs, and they show "Started web server" at the end.
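This is how we are checking the service (the web console port 9443 is the MQ default):

kubectl get svc    # find the chart's LoadBalancer service and wait for EXTERNAL-IP
# then browse to https://<EXTERNAL-IP>:9443/ibmmq/console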
Could anybody suggest what might be the reason we cannot reach the IBM WebSphere MQ web interface, and what are the possible ways to overcome this issue?
I'm looking for the Azure alternative to the Data Flow model of data source-processor-sink.
I want the three entities to be separate microservices. I want to use messaging as a link between these three.
Basically, the source app takes data from another service and sends it to the processor, while the processor app acts on it and sends the relevant notification/alert to the sink.
I'm aware I can use RabbitMQ for the messaging, but I need to know which one will be better on Azure - Service Bus topics or Event Hubs? And how can I use them?
At the moment, there isn't a Spring Cloud Stream binder implementation for Azure Event Hubs.
Without it, neither the out-of-the-box nor the custom apps can be built as messaging microservices, where Spring Cloud Stream provides the programming model and Spring Cloud Data Flow lets you orchestrate the individual microservices into a data pipeline (i.e., source-processor-sink) via the DSL or the drag-and-drop GUI.
Microsoft was exploring a binder implementation in the past; possibly it will end up in the Azure Spring Boot project. Feel free to drop an issue on their backlog.
I am a novice with Spring Cloud Data Flow and Spring Cloud Stream applications.
Currently my project diagram looks like the following:
I route a POST request from an outside client through a Zuul API gateway to a microservice called Composite. Composite creates a stream using a REST POST and deploys it onto the Spring Cloud Data Flow Server. As far as I know, the microservices mongodb and file run as co-existing JVM processes. If my client has to know the status of the stream and the status of the processed data, how should the Composite microservice interact with the Spring Cloud Data Flow Server? Currently, when I make the POST call to deploy the stream, I don't even get the status from the SCDF Server. Does SCDF expose any hooks to look at the individual apps? Also, how can I change the flow at runtime to create a dynamic mesh?
Currently I am using the Local Spring Cloud Data Flow Server for development.
The runtime platform is local.
The Local runtime is recommended only for development purposes. If you're preparing for production, please make sure to choose a platform variant (e.g., CF, K8s, YARN, ..) that comes with the non-functional requirements to support reliable and durable execution of all the applications running in the streaming pipeline.
As far as I know the microservices mongodb and file run as co-existing JVM processes.
If your stream definition is file | mongodb, you'd have 2 different JVMs even when using the Local runtime. They're independent Boot applications.
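For instance, a quick sketch with the SCDF shell (the stream name is mine):

dataflow:> stream create --name file-to-mongo --definition "file | mongodb" --deploy

Once deployed, the file source and the mongodb sink each start as their own Boot process, wired together through the configured messaging middleware.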
How should the Composite microservice interact with the Spring Cloud Data Flow Server?
It is not clear what you mean by "composite" here. All the microservice applications in SCDF communicate via messaging middleware such as Kafka or RabbitMQ. SCDF provides the orchestration capability to run such applications on various runtime platforms.
Currently when I make the POST call to deploy the stream I don't even get the status from the SCDF Server
You can use SCDF's REST APIs to query the current status of the apps, and this is platform agnostic. You can view the list of supported APIs by hitting the root URL of the server; there's a gap in the docs that we will fix. The following APIs could be useful for status checks.
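For example, assuming a local server on its default port 9393:

curl http://localhost:9393/                      # root URL: lists the supported REST resources
curl http://localhost:9393/streams/definitions   # stream definitions along with their status
curl http://localhost:9393/runtime/apps          # runtime state of each deployed app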
Does SCDF expose any hooks to look at the individual apps?
Once the apps are deployed to a runtime platform, you can take advantage of Boot's actuator endpoints to explore more details such as trace, metrics, health, and env at each individual application level. See Boot's actuator endpoints for more details. For instance, if your mongodb app is running locally on port 23000, you can check granular metrics for this application at: http://localhost:23000/metrics.
[As an FYI: future SCDF releases will include integration of Spring Boot + Spring Cloud Sleuth metrics and a visual representation of the same.]
Also, how can I change the flow at runtime to create a dynamic mesh?
If you're referring to editing a running streaming pipeline with additions/deletions, we are currently exploring a design approach to support this functionality.
I am new to MDBs, so my questions may sound simple.
I implemented an MDB (serving as a consumer) using JDeveloper 11.1.7 and built a JAR file using its deployment functionality. Now I need to deploy it to a WebLogic 10.3 application server. I have several questions:
1) Should I deploy it as a library or as an application?
2) After I successfully deploy it and it's in the "RUNNING" mode, I assume it should be listening to the particular Queue I specified as a Resource in my MDB implementation. Is that correct?
3) When implementing an MDB, all the examples only specify the "destination" but not the "ConnectionFactory". How does it know where to connect to?
Should I deploy it as a library or as an application?
Deploy as an application since the MDB will likely contain business logic specific to the app.
After I successfully deploy and it's in the "RUNNING" mode I assume it
should be listening to the particular Queue I specified as a Resource
in my MDB implementation. Is that correct?
Yes, if your JMS provider is local, specify the name bound in the local JNDI tree for the destination using destination-jndi-name.
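In weblogic-ejb-jar.xml that looks roughly like this; the EJB and JNDI names here are assumptions:

<weblogic-ejb-jar xmlns="http://xmlns.oracle.com/weblogic/weblogic-ejb-jar">
  <weblogic-enterprise-bean>
    <ejb-name>MyConsumerMDB</ejb-name>
    <message-driven-descriptor>
      <!-- JNDI name of the queue in the local JNDI tree -->
      <destination-jndi-name>jms/MyQueue</destination-jndi-name>
    </message-driven-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>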
When implementing an MDB all the examples only specify the
"destination" but not the "ConnectionFactory". How does it know where
to connect to?
If the MDB is consuming messages from the local WebLogic JMS provider, the container automatically manages the configuration for connections and sessions, so don't set provider-url, initial-context-factory, or connection-factory-jndi-name unless you have a custom factory to use.
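As a sketch, an annotation-only MDB that declares just the destination and lets the container supply the connection; the queue's JNDI name is an assumption:

import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// mappedName is the JNDI name of the queue; no connection factory is declared,
// because the container creates the connections and sessions for a local provider.
@MessageDriven(mappedName = "jms/MyQueue")
public class MyConsumerMDB implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                System.out.println("Received: " + ((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}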
Refer to the WebLogic 10.3 documentation for details: