Multiple consumers per StreamingAmf connection? - flex4

I have an Adobe AIR 2.0 application that also uses Spring BlazeDS Integration. Inside this application I have a couple of data grids. The design was for each grid's model to register a Consumer to listen for changes pushed from BlazeDS. The first grid instantiated works correctly; however, each subsequent grid causes the following warning in BlazeDS:
[WARN] [Endpoint.StreamingAMF] Endpoint with id 'streaming-amf' received a duplicate streaming connection request from, FlexClient with id ''
I was under the impression you could have multiple Consumers inside a Flex/AIR application. Am I mistaken, or have I just missed something in my configuration?
Server side channel definition
<channel-definition id="streaming-amf" class="mx.messaging.channels.StreamingAMFChannel">
    <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/streamingamf" class="flex.messaging.endpoints.StreamingAMFEndpoint"/>
    <properties>
        <add-no-cache-headers>false</add-no-cache-headers>
        <max-streaming-clients>15</max-streaming-clients>
        <user-agent-settings>
            <user-agent match-on="AdobeAIR" kickstart-bytes="2048" max-streaming-connections-per-session="2" />
            <user-agent match-on="MSIE" kickstart-bytes="2048" max-streaming-connections-per-session="3" />
            <user-agent match-on="Firefox" kickstart-bytes="2048" max-streaming-connections-per-session="3" />
        </user-agent-settings>
    </properties>
</channel-definition>
Code for ChannelSet:
<s:ChannelSet id="pricingCS">
    <s:channels>
        <s:StreamingAMFChannel id="streaming-amf"
                               url="http://localhost:8080/blazeds/messagebroker/streamingamf"
                               connectTimeout="5"/>
    </s:channels>
</s:ChannelSet>
Code for Consumer
consumer = new Consumer();
consumer.id = "pricingConsumer";
consumer.destination = "pricingUpdates";
consumer.subtopic = pId;
consumer.channelSet = channelSet;
consumer.addEventListener(MessageEvent.MESSAGE, priceUpdate);
consumer.addEventListener(MessageFaultEvent.FAULT, priceUpdateFail);
consumer.subscribe();

Related

How Do I Connect a STOMP Client to an ActiveMQ Artemis Destination Created Using JMS (Spring Boot)?

CONTEXT
I am trying to learn about Spring JMS and message-oriented middleware (MOM), and I am using ActiveMQ Artemis for this. I created a queue destination address using the jakarta.jms.* API and managed to send some messages to the queue like this:
public void createUserDestination(String userId) throws JMSException {
    // queueDestination, producer, session, connection and log are fields defined elsewhere in this class
    queueDestination = setupConnection().createQueue("user" + userId);
    producer = session.createProducer(queueDestination);
    producer.setDeliveryMode(DeliveryMode.PERSISTENT);
    producer.send(session.createTextMessage("Testing queue availability"));
    connection.close();
    log.info("successfully created group... going back to controller");
}
So for example, if I pass an ID of user12345abc, I get a queue address user12345abc of routing type ANYCAST, with one queue underneath (with that same address) where my message is placed.
PROBLEM
Now, I wanted to write a simple web front end with STOMP that can connect to this queue. But I have been having a ton of problems connecting to that queue address, because each time I try to connect by providing the destination address, a new address is created in the MOM and I am connected to that instead.
My STOMP code looks like this (the first argument is the destination address; you can ignore the rest of the code):
stompClient.subscribe("jms.queue.user12345abc", (message) => {
receivedMessages.value.push(message.body);
});
In this case, a completely new queue is created with the address jms.queue.user12345abc, which is not what I want at all.
I configured my Spring backend to use an external MOM broker like this (I know this is important):
public void configureMessageBroker(MessageBrokerRegistry registry) {
    // these two prefixes are for destinations that messages are pushed to
    registry.enableStompBrokerRelay("jms.topic", "jms.queue")
            .setRelayHost("127.0.0.1")
            .setRelayPort(61613)
            .setSystemLogin(brokerUsername)
            .setSystemPasscode(brokerPassword)
            .setClientLogin(brokerUsername)
            .setClientPasscode(brokerPassword);
    // this prefixes the endpoints where clients send messages
    registry.setApplicationDestinationPrefixes("/app", "jms.topic", "jms.queue");
    // this prefixes the endpoints where users subscribe
    registry.setUserDestinationPrefix("/user");
}
But it's still not working as I expect it to. Am I getting some concept wrong here? How do I use STOMP to connect to that queue I created earlier with JMS?
It's not clear why you are using the jms.queue and jms.topic prefixes. Those are similar to, but not quite the same as, the jms.queue. and jms.topic. prefixes which were used way back in ActiveMQ Artemis 1.x (whose last release was in early 2018, almost 5 years ago now).
In any case, I recommend you use the more widely adopted /queue/ and /topic/, e.g.:
public void configureMessageBroker(MessageBrokerRegistry registry) {
    // these two prefixes are for destinations that messages are pushed to
    registry.enableStompBrokerRelay("/topic/", "/queue/")
            .setRelayHost("127.0.0.1")
            .setRelayPort(61613)
            .setSystemLogin(brokerUsername)
            .setSystemPasscode(brokerPassword)
            .setClientLogin(brokerUsername)
            .setClientPasscode(brokerPassword);
    // this prefixes the endpoints where clients send messages
    registry.setApplicationDestinationPrefixes("/app", "/topic/", "/queue/");
    // this prefixes the endpoints where users subscribe
    registry.setUserDestinationPrefix("/user");
}
Then in broker.xml you'd need to add the corresponding anycastPrefix and multicastPrefix values on the STOMP acceptor, e.g.:
<acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true;anycastPrefix=/queue/;multicastPrefix=/topic/</acceptor>
To be clear, your JMS code will stay the same, but your STOMP consumer would be something like:
stompClient.subscribe("/queue/user12345abc", (message) => {
receivedMessages.value.push(message.body);
});

Updating Apache Camel JPA object in database triggers deadlock

So I have an Apache Camel route that reads Data elements from a JPA endpoint, converts them to DataConverted elements and stores them in a different database via a second JPA endpoint. Both endpoints are Oracle databases.
Now I want to set a flag on the original Data element that it got copied successfully. What is the best way to achieve that?
I tried it like this: saving the ID in the context, then reading it and calling a DAO method in .onCompletion().onCompleteOnly().
from("jpa://Data")
.onCompletion().onCompleteOnly().process(ex -> {
var id = Long.valueOf(getContext().getGlobalOption("id"));
myDao().setFlag(id);
}).end()
.process(ex -> {
Data data = ex.getIn().getBody(Data.class);
DataConverted dataConverted = convertData(data);
ex.getMessage().setBody(data);
var globalOptions = getContext().getGlobalOptions();
globalOptions.put("id", data.getId().toString());
getContext().setGlobalOptions(globalOptions);
})
.to("jpa://DataConverted").end();
However, this seems to trigger a deadlock: the DAO method stalls on the commit of the update. The only explanation I can see is that the Data object gets locked by Camel and is still locked in the .onCompletion().onCompleteOnly() part of the route, so it can't be updated there.
Is there a better way to do it?
Have you tried using the Recipient List EIP, where the first destination is the jpa:DataConverted endpoint and the second destination is the endpoint that sets the flag? This way both get the same message and are executed sequentially.
https://camel.apache.org/components/3.17.x/eips/recipientList-eip.html
from("jpa://Data")
.process(ex -> {
Data data = ex.getIn().getBody(Data.class);
DataConverted dataConverted = convertData(data);
ex.getIn().setBody(data);
})
.recipientList(constant("direct:DataConverted","direct:updateFlag"))
.end();
from("direct:DataConverted")
.to("jpa://DataConverted")
.end();
from("direct:updateFlag")
.process(ex -> {
var id = ((MessageConverted) ex.getIn().getBody()).getId();
myDao().setFlag(id);
})
.end();
Keep in mind, you might want to make the route transactional by adding .transacted()
https://camel.apache.org/components/3.17.x/eips/transactional-client.html
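For illustration, here is a minimal sketch of what the transacted variant of the first route might look like. The route class name is made up, .transacted() assumes a transaction manager/policy is available in the Camel registry, and Data, DataConverted and convertData(...) are the types and helper from the question:
import org.apache.camel.builder.RouteBuilder;

// Hypothetical route class; only .transacted() is new compared to the snippet above.
public class CopyDataRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("jpa://Data")
            .transacted() // run the JPA consumption, conversion and recipient list in one transaction
            .process(ex -> {
                Data data = ex.getIn().getBody(Data.class);
                ex.getIn().setBody(convertData(data)); // convertData(...) comes from the question
            })
            .recipientList(constant("direct:DataConverted,direct:updateFlag"))
            .end();
    }
}
Whether this actually removes the lock contention depends on how the two Oracle data sources and their transaction managers are wired up, so treat it as a starting point rather than a guaranteed fix.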

Listener for NATS JetStream

Can someone help with how to configure a NATS JetStream subscription in Spring Boot asynchronously? For example, I am looking for an annotation equivalent to @KafkaListener for NATS JetStream.
I am able to pull messages using an endpoint; however, when I tried to receive messages using a push subscription, the dispatcher handler is not invoked. I need to know how to make the listener active so it consumes messages immediately once they are published to the subject.
Any insights /examples regarding this will be helpful, thanks in advance.
I don't know your JetStream retention policy or the way you want to subscribe, but I have sample code for a WorkQueuePolicy push subscription; I hope this helps.
public static void subscribe(String streamName, String subjectKey,
                             String queueName, IMessageHandler iMessageHandler)
        throws IOException, InterruptedException, JetStreamApiException {
    long s = System.currentTimeMillis();
    Connection nc = Nats.connect(options); // options is a preconfigured io.nats.client.Options field
    long e = System.currentTimeMillis();
    logger.info("Nats Connect in " + (e - s) + " ms");
    JetStream js = nc.jetStream();
    Dispatcher disp = nc.createDispatcher();
    MessageHandler handler = (msg) -> {
        try {
            iMessageHandler.onMessageReceived(msg);
        } catch (Exception exc) {
            msg.nak();
        }
    };
    ConsumerConfiguration cc = ConsumerConfiguration.builder()
            .durable(queueName)
            .deliverGroup(queueName)
            .maxDeliver(3)
            .ackWait(Duration.ofMinutes(2))
            .build();
    PushSubscribeOptions so = PushSubscribeOptions.builder()
            .stream(streamName)
            .configuration(cc)
            .build();
    js.subscribe(subjectKey, disp, handler, false, so);
    System.out.println("NatsUtil: " + queueName + " subscribed");
}
IMessageHandler is my custom interface to handle nats.io received messages.
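The interface itself is not shown in the answer; a minimal sketch of what it might look like (the name and method signature are assumptions, not the author's actual code) is:
import io.nats.client.Message;

// Hypothetical reconstruction of the callback interface used in the snippet above.
public interface IMessageHandler {
    void onMessageReceived(Message msg) throws Exception;
}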
First, configure the NATS connection. Here you will specify all your connection details like server address(es), authentication options, connection-level callbacks etc.
Connection natsConnection = Nats.connect(
        new Options.Builder()
                .server("nats://localhost:4222")
                .connectionListener((connection, eventType) -> {})
                .errorListener(new ErrorListener() {})
                .build());
Then construct a JetStream instance
JetStream jetStream = natsConnection.jetStream();
Now you can subscribe to subjects. Note that JetStream consumers can be durable or ephemeral, and can work according to push or pull logic. Please refer to the NATS documentation (https://docs.nats.io/nats-concepts/jetstream/consumers) to make the appropriate choice for your specific use case. The following example constructs a durable push consumer:
//Subscribe to a subject.
String subject = "my-subject";

//queues are analogous to Kafka consumer groups, i.e. consumers belonging
//to the same queue (or, better to say, reading the same queue) will get
//only one instance of each message from the corresponding subject
//and only one of those consumers will be chosen to process the message
String queueName = "my-queue";

//Choosing delivery policy is analogous to setting the current offset
//in a partition for a consumer or consumer group in Kafka.
DeliverPolicy deliverPolicy = DeliverPolicy.New;

PushSubscribeOptions subscribeOptions = ConsumerConfiguration.builder()
        .durable(queueName)
        .deliverGroup(queueName)
        .deliverPolicy(deliverPolicy)
        .buildPushSubscribeOptions();

Subscription subscription = jetStream.subscribe(
        subject,
        queueName,
        natsConnection.createDispatcher(),
        natsMessage -> {
            //This callback will be called for incoming messages
            //asynchronously. Every subscription configured this
            //way will be backed by its own thread, that will be
            //used to call this callback.
        },
        true, //true if you want received messages to be acknowledged
              //automatically, otherwise you will have to call
              //natsMessage.ack() manually in the above callback function
        subscribeOptions);
As for a declarative API (i.e. some form of @NatsListener annotation analogous to @KafkaListener from the Spring for Apache Kafka project), there is none available out of the box in Spring. If you feel you absolutely need it, you can write one yourself if you are familiar with Spring BeanPostProcessors or other extension mechanisms that can help with that (a rough sketch follows after the list of links below). Alternatively, you can look at third-party libraries; it looks like a bunch of people (including myself) felt a bit uncomfortable when switching from Kafka to NATS, so they tried to bring the usual way of doing things with them from the Kafka world. Some examples can be found on GitHub:
https://github.com/linux-china/nats-spring-boot-starter
https://github.com/dstrelec/nats
https://github.com/amalnev/declarative-nats-listeners
There may be others.
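For what it's worth, a very rough sketch of the BeanPostProcessor approach mentioned above could look like the following. The @NatsJetStreamListener annotation and NatsListenerPostProcessor class are made-up names; only the jnats calls mirror the API shown earlier, and error handling, acknowledgement strategy and subscription lifecycle management are deliberately left out:
import io.nats.client.Connection;
import io.nats.client.Dispatcher;
import io.nats.client.JetStream;
import io.nats.client.Message;
import io.nats.client.PushSubscribeOptions;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.core.annotation.AnnotationUtils;
import org.springframework.util.ReflectionUtils;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker annotation for listener methods.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@interface NatsJetStreamListener {
    String subject();
    String durable();
}

// Register this as a bean (e.g. via a @Bean method) so Spring applies it to all other beans.
class NatsListenerPostProcessor implements BeanPostProcessor {

    private final Connection natsConnection;

    NatsListenerPostProcessor(Connection natsConnection) {
        this.natsConnection = natsConnection;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        ReflectionUtils.doWithMethods(bean.getClass(), method -> {
            NatsJetStreamListener ann = AnnotationUtils.findAnnotation(method, NatsJetStreamListener.class);
            if (ann == null) {
                return;
            }
            try {
                JetStream js = natsConnection.jetStream();
                Dispatcher dispatcher = natsConnection.createDispatcher();
                PushSubscribeOptions options = PushSubscribeOptions.builder()
                        .durable(ann.durable())
                        .build();
                // Invoke the annotated method for every incoming message; auto-ack for simplicity.
                js.subscribe(ann.subject(), dispatcher,
                        (Message msg) -> ReflectionUtils.invokeMethod(method, bean, msg),
                        true, options);
            } catch (Exception e) {
                throw new IllegalStateException("Failed to subscribe " + beanName + "#" + method.getName(), e);
            }
        });
        return bean;
    }
}
With this sketch, a public bean method taking a single io.nats.client.Message parameter and annotated with @NatsJetStreamListener(subject = "my-subject", durable = "my-durable") would be invoked for each incoming message.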

Asp .Net Core Web API where to subscribe RabbitMQ

I am trying to implement a publish/subscribe architecture using Web API and the RabbitMQ message broker.
I have two projects in my solution: Publisher and Subscriber.
Publishing works successfully, but I cannot find a place in my subscriber project to read the published messages from the queue.
Both of my projects are ASP.NET Core Web APIs.
Thanks in advance.
Register RabbitMQ consumption as a hosted service in the ConfigureServices method. The IHostedService is started as part of application startup, so RabbitMQ starts listening there. For example, with MassTransit (which registers its bus as an IHostedService for you):
public void ConfigureServices(IServiceCollection services)
{
    services.AddMassTransit(x =>
    {
        x.UsingRabbitMq();
    });

    // OPTIONAL, but can be used to configure the bus options
    services.AddOptions<MassTransitHostOptions>()
        .Configure(options =>
        {
            // if specified, waits until the bus is started before
            // returning from IHostedService.StartAsync
            // default is false
            options.WaitUntilStarted = true;

            // if specified, limits the wait time when starting the bus
            options.StartTimeout = TimeSpan.FromSeconds(10);

            // if specified, limits the wait time when stopping the bus
            options.StopTimeout = TimeSpan.FromSeconds(30);
        });
}

Getting exception while Consuming https Webservice in mule

I'm trying to call an HTTPS web service using CXF-generated client proxies within Mule. Almost 99% of the time, I get:
Caused by: org.apache.commons.httpclient.ProtocolException: Unbuffered entity enclosing request can not be repeated.
at org.apache.commons.httpclient.methods.EntityEnclosingMethod.writeRequestBody(EntityEnclosingMethod.java:487)
at org.apache.commons.httpclient.HttpMethodBase.writeRequest(HttpMethodBase.java:2114)
at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1096)
The app has an HTTP inbound endpoint. The Mule Java transformer tries to call a web service over HTTPS using CXF-generated client proxies, and I'm running into the exception above.
I've provided a screenshot of the Mule flow [http://i.stack.imgur.com/7X9Wg.jpg]. Much appreciated!
Mule config xml
<cxf:jaxws-service serviceClass="test.service.https.TestService" doc:name="SOAP" configuration-ref="CXF_Configuration" enableMuleSoapHeaders="false"/>
<custom-transformer class="test.service.https.CallLicenseService" doc:name="Calls HTTPS WS using CXF generated client proxies" encoding="UTF-8" mimeType="text/plain"/>
<logger message="Success" level="INFO" doc:name="Logger"/>
<set-payload value="#['HELLO SUCCESS']" doc:name="Set Payload"/>
</flow>
Transformer
URL wsdlURL = null;
String serviceUrl = "TARGET_HTTPS_WSDL"; //This would be the target https URL
try {
    wsdlURL = new URL(serviceUrl);
} catch (MalformedURLException e) {
    Logger.getLogger(getClass()).info("", e);
}
AuditLogServiceService ss = new AuditLogServiceService(wsdlURL);
AuditLoggingService port = ss.getAuditLoggingServicePort();
((BindingProvider) port).getRequestContext().put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY,
        serviceUrl.substring(0, serviceUrl.length() - 5));
AuditLogServiceRequest request = new AuditLogServiceRequest();
request.setClientId("4");
request.setUserId("101");
request.setEventSubType("1");
request.setEventType("1");
AuditLogMessage msg = new AuditLogMessage();
msg.setMessage("Hello Test");
request.getLogMessages().add(msg);
AuditLogServiceResponse response = port.logEvent(request);
System.out.println(response.getMessage());
return response.getMessage();
First of all, if you need to consume a web service you need to use <cxf:jaxws-client serviceClass="..."> instead of <cxf:jaxws-service ...>. The next step is to use an HTTP outbound endpoint to post to the external web service. Please refer to the following link: http://www.mulesoft.org/documentation/display/current/Consuming+Web+Services+with+CXF
One more thing: you need to use a Java component instead of <custom-transformer class="...">, and you need to set the payload just before that component, i.e. set the payload before posting it to the external web service.
