Spring Integration IMAP - multiple email accounts from the same domain

I'm using Spring's IMAP mechanism in order to receive emails from my account into my server.
This works like a charm.
However, a new requirement came up: instead of listening to a single email account, I will have to listen to multiple accounts.
I've tried creating a new channel for each of these accounts, and it WORKS!
The problem is that each channel I add means a new thread running.
Since I'm talking about a large number of accounts, this is quite an issue.
My question is:
Since all the email accounts (I would like to listen to) are in the same domain, i.e.:
account1@myDomain.com
account2@myDomain.com
account3@myDomain.com
....
Is it possible to create a single channel with multiple accounts?
Is there any alternative for me other than defining N new channels?
thanks.
Nir

I assume you mean channel adapter, not channel (multiple channel adapters can send messages to the same channel).
No, you can't use a single connection for multiple accounts.
This is a limitation of the underlying internet mail protocols.
If you are using imap idle adapters, yes, this will not scale well because it needs a thread for each. However, if you are only talking about a few 10s of accounts, this is probably not an issue. For a much larger number of accounts, it may be better to use a polled adapter.
But, even so, unless it's a fixed number of accounts, the configuration could be burdensome (but you could programmatically spin up new adapters).
For complex scenarios like this, you may want to consider writing your own "adapter" that uses the JavaMail API directly and manages the connections in a more sophisticated way (but you still need a separate connection for each account). It wouldn't have to be a "real" adapter, just a POJO that interacts with JavaMail. Then, when you receive a message from one of the accounts, send it to a channel using a <gateway/>.
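A very rough sketch of that idea follows; the MailPoller class, the MailGateway interface, the host name and the single shared password are all made up for illustration, and real code would handle credentials and errors per account.

```java
// Sketch only: a POJO that polls several IMAP accounts on the same host with
// plain JavaMail and hands each new message to a Spring Integration gateway.
// MailPoller, MailGateway, the host and the shared password are hypothetical.
import java.util.List;
import java.util.Properties;
import javax.mail.Flags;
import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Store;
import javax.mail.search.FlagTerm;

// Hypothetical gateway interface, bound to a message channel with <gateway/>.
interface MailGateway {
    void newMail(Message message);
}

public class MailPoller {

    private final MailGateway gateway;
    private final List<String> accounts;              // account1, account2, ...
    private final String host = "imap.myDomain.com";  // placeholder
    private final String password;                    // placeholder; one credential source per account in real code

    public MailPoller(MailGateway gateway, List<String> accounts, String password) {
        this.gateway = gateway;
        this.accounts = accounts;
        this.password = password;
    }

    // Call this from a single scheduled task instead of one idle thread per account.
    public void pollAll() throws Exception {
        Session session = Session.getInstance(new Properties());
        for (String account : accounts) {
            Store store = session.getStore("imaps");
            store.connect(host, account, password);    // still one connection per account
            try {
                Folder inbox = store.getFolder("INBOX");
                inbox.open(Folder.READ_WRITE);
                // Fetch only unseen messages and push them into the integration flow.
                Message[] unseen = inbox.search(new FlagTerm(new Flags(Flags.Flag.SEEN), false));
                for (Message message : unseen) {
                    gateway.newMail(message);
                    message.setFlag(Flags.Flag.SEEN, true);
                }
                inbox.close(false);
            } finally {
                store.close();
            }
        }
    }
}
```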

Related

Redis publisher also receiving the message

I am building a scalable chat application using Go and Redis w/ websockets.
I need to publish a new message using the Redis pub-sub model to other WebSocket servers to inform all the users (held in memory on the other servers) about the newly joined user.
But the issue is that the publisher (also a Redis client) receives the same message. Is there a direct way to solve this?
Workaround:
Every time the publisher receives an event, check whether the user in that event is already in its list of current local users (sketched below).
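The question is about Go, but just to illustrate the same check, here is a minimal sketch with the Jedis Java client; the channel name, the localUsers map and the assumption that the payload is the user id are all made up.

```java
// Sketch of the workaround: the subscriber ignores a "user joined" event when
// that user is already in this server's local map, which is exactly what
// happens when this server was the publisher. Channel name and payload format
// are hypothetical.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class ChatSubscriber extends JedisPubSub {

    private final Map<String, Object> localUsers = new ConcurrentHashMap<>();

    // Called when a user connects to this server, before the "joined" event is published.
    public void userJoinedLocally(String userId, Object session) {
        localUsers.put(userId, session);
    }

    @Override
    public void onMessage(String channel, String message) {
        String userId = message;                 // assume the payload is just the user id
        if (localUsers.containsKey(userId)) {
            return;                              // we published this event ourselves; nothing to do
        }
        // otherwise, notify the users connected to this server about the new user
    }

    public static void main(String[] args) {
        ChatSubscriber subscriber = new ChatSubscriber();
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.subscribe(subscriber, "chat:user-joined");  // blocks on this thread
        }
    }
}
```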
WHY NEGATIVE VOTES? I'm so pissed at stack-overflow these days. People have no tolerance or too much arrogance

Simple Server to PUSH lots of data to Browser?

I'm building a Web Application that consumes data pushed from Server.
Each message is JSON and could be large, hundreds of kilobytes; messages are sent a couple of times per minute, and the order doesn't matter.
The server should be able to persist not-yet-delivered messages, potentially storing a couple of megabytes per client for a couple of days, until the client comes online. There's a limit on the storage size for unsent messages, say 20 MB per client, and old undelivered messages get deleted when this limit is exceeded.
The server should be able to handle around a thousand simultaneous connections. How could this be implemented simply?
Possible Solutions
I was thinking maybe to store messages as files on disk, have the browser poll every second to check for new messages, and serve them with Nginx or something like that? Are there configs / modules for Nginx for such use cases?
Or maybe it's better to use an MQTT server or some message queue like RabbitMQ with some browser adapter?
Actually, MQTT supports the concept of sessions that persist across client connections, but the client must first connect and request a "non-clean" session. After that, if the client is disconnected, the broker will hold all the QoS=1 or 2 messages destined for that client until it reconnects.
With MQTT v3.x, technically, the server is supposed to hold all the messages for all these disconnected clients forever! Each message maxes out at a 256 MB payload, but the server is supposed to hold everything you give it. This created a big problem for servers, which MQTT v5 came in to fix, and most real-world brokers have configurable settings around this.
But MQTT shines if the connections are over unreliable networks (wireless, cell modems, etc) that may drop and reconnect unexpectedly.
If the clients are connected over fairly reliable networks, AMQP with RabbitMQ is considerably more flexible, since clients can create and manage the individual queues. But the neat thing is that you can mix the two protocols using RabbitMQ, as it has an MQTT plugin. So, smaller clients on an unreliable network can connect via MQTT, and other clients can connect via AMQP, and they can all communicate with each other.
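To make the "non-clean session" idea concrete, here is a small sketch with the Eclipse Paho Java client; the broker URL, client id and topic are placeholders.

```java
// Sketch: an MQTT v3 client that requests a persistent ("non-clean") session and
// subscribes at QoS 1, so the broker holds messages published while it is offline.
// Broker URL, client id and topic are placeholders.
import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class PersistentSessionClient {

    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://broker.example.com:1883",
                "browser-client-42", new MemoryPersistence());

        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(false);  // keep the session (and queued QoS 1/2 messages) across reconnects

        client.setCallback(new MqttCallback() {
            @Override
            public void connectionLost(Throwable cause) { /* reconnect logic goes here */ }

            @Override
            public void messageArrived(String topic, MqttMessage message) {
                System.out.println("got " + message.getPayload().length + " bytes on " + topic);
            }

            @Override
            public void deliveryComplete(IMqttDeliveryToken token) { }
        });

        client.connect(options);
        client.subscribe("data/updates", 1);  // QoS 1 so offline messages are retained for us
    }
}
```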
MQTT is most likely not what you are looking for. The protocol is meant to be lightweight and, as the comments pointed out, the spec says there may only exist "Control Packets of size up to 268,435,455 (256 MB)" (source). Clearly, this is much too small for your use case.
Moreover, if a client isn't connected (and subscribed to that particular topic) at the time the message is published, the message will never be delivered. EDIT: As @Brits pointed out, this only applies to QoS 0 pubs/subs.
As JD Allen mentioned, you need a queuing service like RabbitMQ or AMQ. There are countless other such services/libraries/packages in existence, so please investigate further.
If you want to roll your own, it might be worth considering using AWS SQS and wrapping some of your own application logic around it. That'll likely be a bit hacky, though, so take that suggestion with a grain of salt.

Alternative http pubsub platform to SNS

I tried to use SNS as a platform to post HTTP messages to clients, but it has two major problems.
I can't send the subscriber IDs / endpoints dynamically; I must create a topic for every combination, but the combinations change every time according to specific message parameters, which change very often.
Trying to work around the first issue, I tried to create a service that generates the topics at run time, but even when I create a new topic I need confirmation from the client after adding them as a subscriber. Considering this happens pretty often, I can't expect clients to confirm being added endlessly, which creates an issue even so.
Can anyone suggest an alternative service that uses HTTP to publish the messages?
Don't use the SNS subscription model; just create endpoints in the SNS application as the users register / log in to your app.
You will have to store on the back end a mapping of each user's account to the endpoint ARN.
FYI, any one user can have many endpoints and some may be invalid.
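A rough sketch of how that looks with the AWS SDK for Java (v1); the platform application ARN, the device token and the way you persist the user-to-ARN mapping are placeholders.

```java
// Sketch: create an SNS platform endpoint when a user registers/logs in,
// remember the mapping user -> endpoint ARN, and publish straight to that ARN.
// The platform application ARN, device token and mapping storage are placeholders.
import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;
import com.amazonaws.services.sns.model.CreatePlatformEndpointRequest;
import com.amazonaws.services.sns.model.PublishRequest;

public class DirectPushService {

    private final AmazonSNS sns = AmazonSNSClientBuilder.defaultClient();

    // Called on register/login; persist the returned ARN against the user id on your back end.
    public String registerEndpoint(String userId, String deviceToken) {
        CreatePlatformEndpointRequest request = new CreatePlatformEndpointRequest()
                .withPlatformApplicationArn("arn:aws:sns:us-east-1:123456789012:app/GCM/my-app") // placeholder
                .withToken(deviceToken)
                .withCustomUserData(userId);
        return sns.createPlatformEndpoint(request).getEndpointArn();
    }

    // Publish to one user's endpoint instead of to a topic (no subscription confirmation needed).
    public void pushToUser(String endpointArn, String message) {
        sns.publish(new PublishRequest().withTargetArn(endpointArn).withMessage(message));
    }
}
```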

MassTransit Multiple Consumers

I have an environment where I have only one app server. I have some messages that take a while to service (10 seconds or so), and I'd like to increase throughput by running multiple instances of my consumer application to process these messages. I've read about the "competing consumer" pattern and gather that it should be avoided when using MassTransit. According to the MassTransit docs here, each receive endpoint should have a unique queue name. I'm struggling to understand how to map this recommendation to my environment. Is it possible to have N instances of consumers running that each receive the same message, but only one of the instances actually acts on it? In other words, can we implement the "competing consumer" pattern but across multiple queues instead of one?
Or am I looking at this wrong? Do I really need to look into the "Send" method as opposed to "Publish"? The downside with "Send" is that it requires the sender to have direct knowledge of the existence of an endpoint, and I want to be dynamic with the number of consumers/endpoints I have. Is there anything built into MassTransit that could help with keeping track of how many consumer instances/queues/endpoints there are that can service a particular message type?
Thanks,
Andy
so the "avoid competing consumers" guidance was from when MSMQ was the primary transport. MSMQ would fall over if multiple threads where reading from the queue.
If you are using RabbitMQ, then competing consumers work brilliantly. Competing consumers is the right answer. Each competing consume will use the same receive from endpoint.
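MassTransit itself is a .NET library, but the pattern underneath is simply several consumers reading from the same queue. A plain RabbitMQ Java client sketch of that pattern (not the MassTransit API; host and queue name are placeholders):

```java
// Sketch of competing consumers with the plain RabbitMQ Java client:
// start this program N times and the broker round-robins messages from the
// shared queue between the instances, each message going to exactly one of them.
// Host and queue name are placeholders.
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class CompetingConsumer {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Every instance declares (idempotently) and reads from the SAME durable queue.
        channel.queueDeclare("slow-work", true, false, false, null);
        channel.basicQos(1);  // hand each instance only one unacked message at a time

        DeliverCallback onMessage = (consumerTag, delivery) -> {
            String body = new String(delivery.getBody(), "UTF-8");
            System.out.println("processing: " + body);
            // ... the ~10 second piece of work goes here ...
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };

        channel.basicConsume("slow-work", false, onMessage, consumerTag -> { });
    }
}
```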

Factors Affected for Low Performance of middleware Messaging Softwares

I am planning to integrate messaging middleware into my web application. Right now I am testing different messaging middleware products like RabbitMQ, JMS, HornetQ, etc.
The examples provided with these products work, but they are not giving the desired results.
So, I want to know: which factors are responsible for performance, and what should one keep an eye on?
Which areas should a developer take care of to improve the performance of messaging middleware?
I'm the project lead for HornetQ but I will try to give you a generic answer that could be applied to any message system you choose.
A common question that I see is people asking why a single producer / single consumer won't give you the expected performance.
When you send a message and ask for confirmation right away, you need to wait for:
The message transfer from client to server
The message being persisted on the disk
The server acknowledging receipt of the message by sending a callback to the client
Similarly, when you receive a message and ACK it to the server:
The ACK is sent from client to server
The ACK is persisted
The server sends back a callback saying that the ACK was processed
And if you need confirmation for all your message sends and message ACKs, you need to wait for these steps, since there is hardware involved in persisting to disk and sending bits over the network.
Message systems will try to scale up with many producers and many consumers. That is, when many clients are producing, they all share the resources available at the server, and the same goes for all the consumers.
There are ways to speed up a single producer or single consumer:
One is by using transactions: you minimize the blocks and syncs you perform on disk while persisting at the server, as well as the round trips on the network (this is actually the same in any database); see the sketch after this list.
Another is by using callbacks instead of blocking at the consumer (JMS 2 is proposing a callback similar to the ConfirmationHandler in HornetQ).
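Here is a rough, generic JMS sketch of the transaction idea (not HornetQ-specific); the connection factory, queue and batch size are placeholders for whatever your provider gives you.

```java
// Sketch: batching sends in a transacted JMS session so the server syncs to disk
// once per commit instead of once per message. The connection factory and queue
// are placeholders supplied by your provider (JNDI lookup, Spring bean, etc.).
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class BatchedSender {

    public void sendBatch(ConnectionFactory factory, Queue queue, String[] payloads) throws Exception {
        Connection connection = factory.createConnection();
        try {
            // 'true' = transacted; sends are only made durable on commit.
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer producer = session.createProducer(queue);

            int batchSize = 100;  // tune: bigger batches mean fewer disk syncs and round trips
            for (int i = 0; i < payloads.length; i++) {
                producer.send(session.createTextMessage(payloads[i]));
                if ((i + 1) % batchSize == 0) {
                    session.commit();  // one sync/round trip for the whole batch
                }
            }
            session.commit();  // flush the remainder
        } finally {
            connection.close();
        }
    }
}
```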
Also, most providers I know have a performance section in their docs with requirements and suggestions for that specific product. You should look at each product individually.
