Azure Messaging FILO (First In Last Out) - azure-servicebus-queues

How is First In Last Out (FILO) done in the Azure messaging framework? All the articles point to FIFO and the Azure Service Bus queue.

Messaging will not give you what you're looking for.
If you think about it, at any point in time the message that is currently last in the queue can be displaced by another incoming message, throwing off your LIFO order: the (N-1)th message would be processed before the Nth. You will need a data store to ingest those messages and query them in descending order.
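To make the data-store suggestion concrete, here is a minimal sketch (the table name, columns, and JDBC wiring are assumptions for illustration): each consumed message is inserted with an arrival timestamp, and LIFO processing is simply a query in descending order.

```java
// Hypothetical sketch: persist incoming messages, then read them back newest-first (LIFO).
// The "messages" table and its columns are assumptions for illustration.
import java.sql.*;

public class LifoStore {
    private final Connection conn;

    public LifoStore(Connection conn) { this.conn = conn; }

    // Called by whatever consumes the queue: store each message with an arrival timestamp.
    public void ingest(String messageId, String body) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO messages (message_id, body, received_at) VALUES (?, ?, CURRENT_TIMESTAMP)")) {
            ps.setString(1, messageId);
            ps.setString(2, body);
            ps.executeUpdate();
        }
    }

    // Read back in descending arrival order, i.e. last in, first out.
    public void processNewestFirst() throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT message_id, body FROM messages ORDER BY received_at DESC");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getString("message_id") + ": " + rs.getString("body"));
            }
        }
    }
}
```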

Unfortunately, you can't do FILO with Azure queues.
Azure has two types of queues:
Azure Storage Queues - do not guarantee FIFO
Azure Service Bus Queues - guarantee FIFO (ordered delivery requires message sessions)
You can refer here for the differences between them.

Related

RabbitMQ architecture to manage trips and coordinates

I'm new to RabbitMQ, but I know that my use case fits well in this kind of architecture. What I want to achieve is the following.
Using an Android application, the user will push the "start trip" button. This will call an API which will create the trip. Then, the Android application will periodically send data (GPS coordinates) to the API (which will perform some task). When the user finishes the trip, another call to the API will be made.
Until now, the API has been a simple RESTful service written using Spring Boot. Now I want to change the architecture and add RabbitMQ.
I've thought that whenever a trip is started, the API will create a queue (queue_trip_XXX, where XXX is the trip identifier), bound to an exchange (trips_exchange) with a routing key (trip_XXX). Then, GPS coordinates will be sent dynamically to the exchange and routed to the corresponding queue. When the user ends the trip, the queue will be removed.
So, there will be one queue for each trip and a unique exchange. Is this appropriate? Do you have any other solution that would better fit this use case?
Another question is: how can I create a consumer that listens for messages sent to a queue?
Thanks!
So, there will be one queue for each trip and a unique exchange. Is this appropriate?
As I mentioned in the comment, I don't think it's a good idea, because every queue in RabbitMQ is a separate Erlang process.
Is there any reason why you would like to process messages from one trip separately from the others? Maybe one queue will be enough to start with?
Another question is: how can I create a consumer that listens for messages sent to a queue?
I assume you already have two nodes (one for the API and one for the RabbitMQ broker).
You should just create a third one, which will be responsible for processing the data.
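As a hedged sketch of that consumer, here is one way to do it with the RabbitMQ Java client, assuming the single-queue suggestion above: a topic exchange named trips_exchange (the question used per-trip routing keys; the wildcard binding is an assumption) with one durable queue, so a single consumer receives the coordinates for all trips.

```java
// A minimal consumer sketch using the RabbitMQ Java client (amqp-client 5.x).
// Exchange, queue and binding-key names are assumptions based on the question.
import com.rabbitmq.client.*;
import java.nio.charset.StandardCharsets;

public class TripCoordinatesConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: broker runs locally

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // One shared queue for all trips, bound to the trips exchange with a wildcard key.
        channel.exchangeDeclare("trips_exchange", BuiltinExchangeType.TOPIC, true);
        channel.queueDeclare("trip_coordinates", true, false, false, null);
        channel.queueBind("trip_coordinates", "trips_exchange", "trip.*");

        DeliverCallback onMessage = (consumerTag, delivery) -> {
            String routingKey = delivery.getEnvelope().getRoutingKey(); // e.g. "trip.42"
            String body = new String(delivery.getBody(), StandardCharsets.UTF_8);
            System.out.println("Coordinates for " + routingKey + ": " + body);
        };

        // autoAck=true keeps the sketch short; use manual acks in production.
        channel.basicConsume("trip_coordinates", true, onMessage, consumerTag -> { });
    }
}
```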

Microservice retrieving OrderId from an Azure service bus topic

I am trying to understand Azure Service Bus, which I intend to use in a current project. I am completely new to this.
Here is the scenario. There are two microservices.
Microservice A - writes order details to a database and also writes the OrderId to an Azure Service Bus topic.
Microservice B - should be able to pick up the OrderId from the same topic whenever one exists and use it to process some other transactions. Users can generate multiple OrderIds in a day.
How do I set up Microservice B to perform this duty? How does this work in reality?
How can this microservice constantly monitor the topic?
You could write a job to check for new messages every X seconds, or even continuously, but that's not a good approach. A better way is the pub-sub approach.
With pub-sub, when a message arrives at the topic, any service subscribed to the topic receives the message and doesn't need to poll for it. Your Microservice B should be subscribed to the topic, and the message will be pushed to it.
For your Microservice B to subscribe to messages you have two options. You can use an Azure Function, which will automatically process the message for you. If you don't want to write an Azure Function, you will have to use the Azure Service Bus library to get this done. The official example only covers a console app (Pub-Sub Azure Service Bus); for a .NET Core Web API you can look at Subscribe Azure Service Bus.
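The linked examples are .NET-oriented; purely as an illustration of the same push-style subscription, here is a hedged sketch using the Azure Service Bus Java SDK (azure-messaging-servicebus). The connection string source, the topic name "orders", and the subscription name "microservice-b" are assumptions.

```java
// Hedged sketch: Microservice B receives OrderIds pushed from a topic subscription.
import com.azure.messaging.servicebus.*;

public class OrderIdSubscriber {
    public static void main(String[] args) throws InterruptedException {
        String connectionString = System.getenv("SERVICEBUS_CONNECTION_STRING"); // assumption

        ServiceBusProcessorClient processor = new ServiceBusClientBuilder()
            .connectionString(connectionString)
            .processor()
            .topicName("orders")                 // topic Microservice A publishes to (assumption)
            .subscriptionName("microservice-b")  // subscription owned by Microservice B (assumption)
            .processMessage(context -> {
                String orderId = context.getMessage().getBody().toString();
                System.out.println("Processing OrderId " + orderId);
                // ... run Microservice B's own transactions here ...
            })
            .processError(context -> System.err.println("Error: " + context.getException()))
            .buildProcessorClient();

        processor.start(); // messages are now pushed to the callback as they arrive
        Thread.sleep(Long.MAX_VALUE);
    }
}
```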

Google Cloud Platform - Ordering of Messages in Console v. Logs

I'm using a Google Cloud Platform Cloud Function. I trigger it via Pub/Sub. In the function logs, the messages are appearing in the order in which they were triggered (newest on top). But if I create a subscription to the published topic and view it in the console like this:
gcloud beta pubsub subscriptions pull test_sub --limit 1000 --auto-ack
the messages appear in random order.
Any idea why?
Google Cloud Pub/Sub does not guarantee the order of messages. There is no attempt to order messages at all. This would break or complicate sharding and clustering of resources.
To quote Google Cloud:
Even in this simple case, guaranteed ordering of messages would put
severe constraints on throughput.
For a best-case design, your software should not assume or rely on any specific order of messages. Messages should be atomic units that do not depend on the messages before or after them. If ordering does matter for your design, you will need to implement time windows and process messages independently of delivery/pull order.
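As a hedged sketch of that approach with the Java Pub/Sub client (google-cloud-pubsub), the subscriber below treats every message as an independent unit and records its publish time, so the application can do its own time-window bucketing instead of relying on delivery order. The project and subscription IDs are assumptions.

```java
// Hedged sketch: pull-style subscriber that keeps the publish timestamp for windowing.
import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.PubsubMessage;

public class UnorderedSubscriber {
    public static void main(String[] args) {
        ProjectSubscriptionName subscription =
            ProjectSubscriptionName.of("my-project", "test_sub"); // assumptions

        MessageReceiver receiver = (PubsubMessage message, AckReplyConsumer consumer) -> {
            // The publish timestamp travels with the message; use it for time-window
            // bucketing instead of assuming messages arrive in order.
            long publishMillis = message.getPublishTime().getSeconds() * 1000L;
            System.out.println(publishMillis + " -> " + message.getData().toStringUtf8());
            consumer.ack();
        };

        Subscriber subscriber = Subscriber.newBuilder(subscription, receiver).build();
        subscriber.startAsync().awaitRunning();
        subscriber.awaitTerminated();
    }
}
```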
For more specific information on Pub/Sub message ordering:
Pub/Sub Ordering messages

Using Akka.net / Actor System for an ETL process

I'm new to the world of actor modeling and I am in love with the idea. But does a pattern exist for processing a batch of messages for bulk storage in a safe manner?
I'm afraid that if I read 400 of an expected 500 messages and put them in a list, and the system shuts down, I will lose those 400 messages from the (persisted) mailbox. In a service bus world, you could ask for a batch of messages and commit all of them only once they have been processed. Thank you.
You may want to combine your actor system with a service bus or reliable queue, like RabbitMQ or Azure Service Bus, and use the actor system only for message processing.
From within Akka.NET itself, you have the persistence extension, which can be used to store actor state in a persistent backend of your choice. It also contains a dedicated kind of actor, AtLeastOnceDeliveryActor, which can be used to resend messages until they are confirmed.
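To illustrate the first suggestion (letting the broker hold the batch until it is safely stored), here is a hedged sketch with the RabbitMQ Java client: messages are consumed with manual acknowledgements, buffered, bulk-stored, and only then acknowledged up to the last delivery tag, so a crash before the store returns the whole batch to the queue. The queue name and batch size are assumptions.

```java
import com.rabbitmq.client.*;
import java.util.ArrayList;
import java.util.List;

public class BatchingConsumer {
    private static final int BATCH_SIZE = 500;            // assumption
    private static final List<Delivery> buffer = new ArrayList<>();

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                      // assumption
        Channel channel = factory.newConnection().createChannel();

        channel.basicQos(BATCH_SIZE);                      // at most one batch in flight
        channel.queueDeclare("etl_input", true, false, false, null);

        DeliverCallback onMessage = (tag, delivery) -> {
            synchronized (buffer) {
                buffer.add(delivery);
                if (buffer.size() >= BATCH_SIZE) {
                    bulkStore(buffer);                     // persist the whole batch first
                    long lastTag = delivery.getEnvelope().getDeliveryTag();
                    channel.basicAck(lastTag, true);       // ack everything up to this tag
                    buffer.clear();
                }
            }
        };

        channel.basicConsume("etl_input", false, onMessage, tag -> { });
    }

    private static void bulkStore(List<Delivery> batch) {
        // Write the batch to your store here; only after this succeeds is the batch acked,
        // so a crash before this point leaves the messages on the queue for redelivery.
    }
}
```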
You can extend split and aggregate in your ESB to do it; I made something similar with Mule ESB a long time ago.

JMS 2.0: Shared-Durable-Consumer on Topic vs Asynchronous-Consumer on Queue; Ref. Official GlassFish 4.0 docs/javaee-tutorial Java EE 7

Ref: Official GlassFish 4.0 docs/javaee-tutorial Java EE 7
First, let us start with the destination type of topic.
As per the GlassFish 4.0 tutorial, section “46.4 Writing High Performance and Scalable JMS Applications”:
This section describes how to use the JMS API to write applications
that can handle high volumes of messages robustly.
In the subsection “46.4.2 Using Shared Durable Subscriptions”:
The SharedDurableSubscriberExample.java client shows how to use shared
durable subscriptions. It shows how shared durable subscriptions
combine the advantages of durable subscriptions (the subscription
remains active when the client is not) with those of shared consumers
(the message load can be divided among multiple clients).
When we run this example as per “46.4.2.1 To Run the SharedDurableSubscriberExample and Producer Clients”, it gives us the same effect/functionality as the earlier example on the destination type of queue: if we follow “46.2.6.2 To Run the AsynchConsumer and Producer Clients”, points 5 onwards, and modify it slightly to use 2 consumer terminal windows and 1 producer terminal window.
Yes, section “45.2.2.2 Publish/Subscribe Messaging Style” does mention:
The JMS API relaxes this requirement to some extent by allowing
applications to create durable subscriptions, which receive messages
sent while the consumers are not active. Durable subscriptions provide
the flexibility and reliability of queues but still allow clients to
send messages to many recipients.
.. and anyway section “46.4 Writing High Performance and Scalable ..” examples are queue style – one message per consumer:
Each message added to the topic subscription is received by only one
consumer, similarly to the way in which each message added to a queue
is received by only one consumer.
What is the precise technical answer for why, in this example, the use of a Shared Durable Consumer on a Topic is presented under “High Performance and Scalable JMS Applications”, versus the use of an Asynchronous Consumer on a Queue?
I was wondering about the same issue, and I found the following link. I understand that John Ament gave you the right response; maybe it was just too short to fully understand.
Basically, when you create a topic you are assuming that only the subscribed consumers will receive its messages. However, processing such a message may require heavy work; in such cases you can create a shared subscription and use as many threads as you want.
Why not use a queue? The answer is quite simple: with a queue, each message is handled by only one consumer, so it cannot reach all the interested applications.
To clarify, I will give you an example. Let's say a federal court publishes thousands of sentences every day and you have three distinct applications that depend on them.
Application A just copies the sentences to a database.
Application B parses each sentence and tries to find all relations between people across all previously saved sentences.
Application C parses each sentence and tries to find all relations between companies across all previously saved sentences.
You could use a Topic for the sentences, to which Applications A, B and C would be subscribed. However, it is easy to see that Application A can process a message very quickly, while Applications B and C may take some time. One available solution is to create a shared subscription for Application B and another one for Application C, so that multiple threads can act on each of them simultaneously...
...Of course there are other solutions; you could, for example, use an unshared subscription (i.e. a regular one) and post all received messages to an ArrayBlockingQueue to be handled by a pool of threads some time later; however, with that decision the developer is the one who has to worry about queue handling.
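A hedged sketch of that alternative (class names, capacity, and pool size are assumptions): a single listener thread posts each received payload onto an ArrayBlockingQueue, and a small thread pool drains it.

```java
// Hedged sketch: one JMS listener feeds an internal queue; a thread pool does the heavy work.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class InternalQueueDispatcher {
    private final BlockingQueue<String> work = new ArrayBlockingQueue<>(10_000); // assumed capacity
    private final ExecutorService pool = Executors.newFixedThreadPool(4);        // assumed pool size

    // Called from the single JMS MessageListener thread.
    public void onSentence(String sentenceText) throws InterruptedException {
        work.put(sentenceText); // blocks when the buffer is full, giving back-pressure
    }

    public void start() {
        for (int i = 0; i < 4; i++) {
            pool.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        String sentence = work.take();
                        // parse the sentence and extract relations here
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
    }
}
```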
Hope this can help.
The idea is that you can have multiple readers on a subscription. This allows you to read more messages faster, assuming you have threads available.
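As a hedged sketch of what that looks like with the JMS 2.0 API: every process or thread that runs the snippet below with the same subscription name joins the same shared durable subscription, and the broker divides the messages among them. The JNDI lookup names and the subscription name are assumptions (they follow the tutorial's conventions).

```java
// Hedged sketch of a JMS 2.0 shared durable consumer (standalone client).
import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class SharedDurableWorker {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory"); // assumption
        Topic topic = (Topic) jndi.lookup("jms/Topic");                                  // assumption

        try (JMSContext context = cf.createContext()) {
            // Every consumer created with the same subscription name shares the load;
            // with a plain (unshared) durable subscription only one consumer is allowed.
            JMSConsumer consumer = context.createSharedDurableConsumer(topic, "sentence-workers");
            consumer.setMessageListener(message -> {
                // heavy per-message work goes here
                System.out.println("Handled by " + Thread.currentThread().getName());
            });
            Thread.sleep(Long.MAX_VALUE); // keep the JVM alive to receive messages
        }
    }
}
```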
JMS Queue:
Queued messages are persisted.
Each message is guaranteed to be delivered once and only once, even if no consumer is running when the message is sent.
JMS Shared Subscription:
A subscription can have zero to many consumers.
If a message is sent when no subscription exists (durable or not), it will never be received.
