I have successfully updated a MassTransit app from 2.x to 3.x and switched to RabbitMQ as my transport. I did this to get one-to-many messaging to function properly. The previous developer thought it would work with MSMQ, but I found that it did not, and it became clear from reading the documentation that I would need to use 3.x and RabbitMQ.
My application has multiple instances of a website running on the server, each instance serving a specific customer base. I want each instance to publish to specific queues so that the data is only available to the back-end processes for that particular instance. I can easily configure each of these processes to read only from specific queues, but how do I get MassTransit to publish only to specific queues?
You should probably configure a separate RabbitMQ virtual host for each customer and point that customer's web site instance at that specific virtual host. That way, each web site has its own virtual host for message traffic, keeping it isolated from the others.
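As a rough illustration, pointing each instance at its customer's virtual host in MassTransit 3.x might look something like this (the vhost name, server address, and credentials below are placeholders):

```csharp
using System;
using MassTransit;

// Each web site instance connects to its own customer-specific vhost,
// so anything it publishes stays inside that vhost.
var busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.Host(new Uri("rabbitmq://localhost/customer-a"), h =>
    {
        h.Username("guest");   // placeholder credentials
        h.Password("guest");
    });
});

busControl.Start();

// Messages published on this bus are only visible to consumers on the same vhost.
// busControl.Publish(new OrderSubmitted(...));  // OrderSubmitted is an illustrative message type
```

Because exchanges and queues are scoped to a virtual host, the back-end processes for one customer never see another customer's messages.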
We have the following flow on-prem to reload master data in the cache of our API servers. The API servers are Spring Boot applications.
Users upload master data via Excel files using an admin UI.
These files are placed on NAS storage. After this, the admin app calls REST endpoints on multiple API servers to reload the master data into each API server's in-memory cache.
We have plans to move this setup to the Azure cloud, where the API servers would be deployed on VM scale sets with auto-scaling enabled.
Given that the number of VMs and their IPs would vary, how can we support this master data reload in the Azure environment? One option I can think of:
The admin app reads the master data file and pushes it to a queue (either Azure Queue Storage or ActiveMQ).
The API servers either have listeners or schedulers that get the message from the queue and reload the master data.
Is this the best approach? With queuing solutions, once a message is read off the queue by one API server, it will not be available to the other instances, right?
Could anyone please advise on alternatives to support this master data reload in the Azure environment with minimal changes to the current application?
Regards
Jacob
I'm already using RabbitMQ as a queue 'buffer' and as a messaging bus, but I'm considering moving to MassTransit to make it easier to use.
We run in a multi-tenant environment, and to isolate our tenants we have created a dedicated vhost for each tenant plus a "common" vhost for non-tenant related messages.
I would like to know if there's a Best Practice for multi-tenancy with MassTransit and if it is possible to reproduce the same schema (1 vhost per tenant) with MassTransit.
Can I create multiple instances of IBusControl (one per tenant, each linked to a dedicated IRabbitMqHost) in the same process?
Yes, MassTransit allows the creation of as many bus instances as you need, and you could create one per vhost without any issues. Just make sure your RabbitMQ server is configured to allow enough connections/sessions to support the total number of tenants, queues, and exchanges.
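For example, a minimal sketch of one bus instance per tenant vhost in a single process (tenant names, server address, and credentials are placeholders):

```csharp
using System;
using System.Collections.Generic;
using MassTransit;

// One bus instance per tenant, each connected to that tenant's dedicated vhost.
var tenants = new[] { "tenant-a", "tenant-b", "common" };
var buses = new Dictionary<string, IBusControl>();

foreach (var tenant in tenants)
{
    var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
    {
        cfg.Host(new Uri($"rabbitmq://localhost/{tenant}"), h =>
        {
            h.Username("guest");   // placeholder credentials
            h.Password("guest");
        });
    });

    bus.Start();
    buses[tenant] = bus;
}

// Resolve the right bus for the current tenant when publishing:
// buses["tenant-a"].Publish(new InvoiceCreated(...));  // InvoiceCreated is illustrative
```

Each bus holds its own connection to RabbitMQ, which is why the connection limits mentioned above matter as the tenant count grows.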
I am working on a prototype for a client where, on AWS, auto-scaling is used to create new VMs from Amazon Machine Images (AMIs), using Akka.
I want to have just one actor control access to the database, so it will create new children as needed and queue up requests that go beyond a set limit.
But I don't know the IP address of the VM, as it may change when Amazon adds/removes VMs based on activity.
How can I discover the actor that will be used to limit access to the database?
I am not certain whether clustering will work (http://doc.akka.io/docs/akka/2.4/scala/cluster-usage.html). This question and its answers are from 2011 (Akka remote actor server discovery), and possibly routing may solve this problem: http://doc.akka.io/docs/akka/2.4.16/scala/routing.html
I have a separate REST service that just goes to the database, so it may be that this service will need to enforce the limit before requests go to the actors.
I would like to know which techniques and tools I should use to be able to send real-time notifications to users, specifically if I build a messaging system.
I can see that modern social networks can send notifications about new messages almost immediately. Even when user 'A' in one country writes a message to user 'B' in another country, 'B' sees it immediately (even if those users live on different continents).
I tried to figure out how this is possible and to find information about it, but without success.
The only thing I found is the technique where we use a Redis or RabbitMQ server with several servers acting as publishers and subscribers. Our API servers receive new messages and push them onto the queue; the subscribers receive the messages, and if they have an open WebSocket with the recipient, they push the message over that WebSocket and the client receives it.
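Concretely, the pattern I mean looks roughly like this (a minimal C# sketch; StackExchange.Redis for the pub/sub part and the in-memory registry of open WebSockets are my own assumptions, and all names and payload formats are illustrative):

```csharp
using System;
using System.Collections.Concurrent;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using StackExchange.Redis;

public static class NotificationFanOut
{
    // WebSockets currently open on *this* API server, keyed by user id (illustrative).
    public static readonly ConcurrentDictionary<string, WebSocket> Sockets =
        new ConcurrentDictionary<string, WebSocket>();

    public static async Task SubscribeAsync()
    {
        var redis = await ConnectionMultiplexer.ConnectAsync("localhost");
        var sub = redis.GetSubscriber();

        // Every API server subscribes to the same channel...
        await sub.SubscribeAsync("chat-messages", async (channel, value) =>
        {
            // The "recipientId|text" payload format is purely illustrative.
            var parts = ((string)value).Split(new[] { '|' }, 2);
            var recipientId = parts[0];
            var text = parts[1];

            // ...but only the server holding the recipient's open WebSocket delivers it.
            if (Sockets.TryGetValue(recipientId, out var socket) && socket.State == WebSocketState.Open)
            {
                var bytes = Encoding.UTF8.GetBytes(text);
                await socket.SendAsync(new ArraySegment<byte>(bytes),
                    WebSocketMessageType.Text, true, CancellationToken.None);
            }
        });
    }

    // The server that accepted the new message publishes it for all subscribers.
    public static Task PublishAsync(ISubscriber sub, string recipientId, string text) =>
        sub.PublishAsync("chat-messages", $"{recipientId}|{text}");
}
```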
But it really won't work if you have a distributed project and your clients are connected to the nearest servers in the nearest data center.
The question is: what technologies/techniques/anything we should use to be able to build notifications in a distributed project?
If you develop your distributed app/system using web technologies, you can consider building what is referred to as a Progressive Web App. With PWAs you can add push notifications in a relatively easy way. You could start with a PWA approach and then decide later on whether developing a native app as well (e.g. iOS or Android) would be necessary.
There are many resources to learn and guide you in developing progressive web apps. Check the references I mentioned above, and you can do this codelab as a starting point.
I am new to Heroku and I am trying to bootstrap a local development environment. Using Foreman, or another tool, can someone please point me to docs that illustrate sending and consuming a message with a worker? The key is setting up the MQ and the worker consuming the message, all configured locally. Thanks!
IronMQ (and IronWorker) are both cloud services and currently do not have a local install option. It's fairly easy to interact with the API from your local machine, though, including pushing messages, getting them, etc.
If you plan on using Push Queues, do keep in mind that in order to "push" back to your localhost you'll need to set up something like localtunnel or ngrok. Here is some information on that: http://dev.iron.io/mq/reference/push_queues/#testing_on_localhost
Please feel free to reach us at support@iron.io or via live chat: get.iron.io/chat
Chad