I want to use AWS SQS for communication between my microservices (and later possibly SNS). Each microservice can have multiple instances running.
Currently I'm trying to implement the Request/Response pattern of message queues.
As I understand it, the normal way is to have one shared request queue and one unique response queue per service instance, whose name is passed along with each request.
The consuming service processes the message and sends the response to the given response queue. Thus, the response always returns to the correct instance of the requesting service.
My problem now comes with Cloud Foundry.
How it should work:
Service A needs to request data from Service B.
There is one queue named A-request-B.
Service A starts with 6 instances.
Every instance creates its own queue: B-response-A-instance[x]
Every request from an instance of A includes its response queue name, so the response is routed to the correct queue.
This is the only way I know to guarantee that the response from B gets to the correct instance of A.
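In code, the flow I'm aiming for looks roughly like this (a minimal boto3 sketch; the ReplyTo attribute name is just my own convention, not anything standard):

```python
# Minimal sketch of the request/response pattern with boto3.
# The "ReplyTo" attribute name is my own convention, not an SQS feature.
import boto3

sqs = boto3.client("sqs")

# Requesting side (one instance of service A): create/own a private reply
# queue and include its URL in every request.
reply_queue_url = sqs.create_queue(QueueName="B-response-A-instance1")["QueueUrl"]
request_queue_url = sqs.get_queue_url(QueueName="A-request-B")["QueueUrl"]

sqs.send_message(
    QueueUrl=request_queue_url,
    MessageBody='{"action": "get-data"}',
    MessageAttributes={
        "ReplyTo": {"DataType": "String", "StringValue": reply_queue_url}
    },
)

# Responding side (any instance of service B): read the ReplyTo attribute and
# send the response to that exact queue, so it reaches the right instance of A.
for msg in sqs.receive_message(
    QueueUrl=request_queue_url,
    MessageAttributeNames=["ReplyTo"],
    WaitTimeSeconds=10,
).get("Messages", []):
    reply_to = msg["MessageAttributes"]["ReplyTo"]["StringValue"]
    sqs.send_message(QueueUrl=reply_to, MessageBody='{"result": "..."}')
    sqs.delete_message(QueueUrl=request_queue_url, ReceiptHandle=msg["ReceiptHandle"])
```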
This doesn't work, as Cloud Foundry doesn't allow the "create-queue" call to SQS, even though I can connect to the SQS instance to send and receive messages.
The only way to create a queue is via the command line.
So I would have to create these 6 response-queues manually beforehand.
And if I start a 7th instance of A, it will fail as it doesn't have its own response queue.
I also tried using the SQS temporary queues, but they too work by creating queues dynamically, which is not possible in Cloud Foundry.
I'm currently stuck with SQS, so switching to Kafka/RabbitMQ or something else is not possible.
Is there any other way to pass a response to the matching service instance? Or is there another way to create queues in Cloud Foundry?
Summary from comments above...
This doesn't work, as Cloud Foundry doesn't allow the "create-queue" call to SQS
Cloud Foundry doesn't really care what messaging system you're using, unless you're using a Marketplace service to create it. In that case, Cloud Foundry will work on your behalf to create a service instance. It does this by talking to a service broker, which does the actual creation of the service instance and user credentials.
In your case, Cloud Foundry handles creating the credentials for AWS SQS through the AWS Service Broker. Unfortunately, the credentials the broker gives you don't have permission to create queues. The creds are only allowed to send and receive messages for the specific queue that was created by the broker.
There's not a lot you can do about this, but there are a couple of options:
Don't use the Marketplace service. Instead, just go to AWS directly, create an IAM user, create your SQS resources, and give the IAM user permissions to them.
Then create a user-provided service with the credentials and information for the resources you created. You can bind the user-provided service to your apps just like a service created by the AWS Service Broker. You'd lose the convenience of using the broker, but you won't have to jump through the hoops you listed when scaling your app instances up/down.
You could create a service instance through the broker, then create a service key. The service key is a long-lived set of credentials, so you could then go into AWS, look up the IAM user associated with that service key, and adjust the permissions so that you can create queues.
You would then need to create a user-provided service, like in the first option, insert the credentials and information from your service key, and bind the user-provided service to any apps that you'd like to use that service.
Don't delete the service key, or your modified user will be removed and your user-provided service will stop working.
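With either option, once the credentials live in a user-provided service, the app side stays the same. As a rough sketch (the service name aws-sqs and the credential key names below are just whatever you chose when creating the user-provided service):

```python
# Sketch: build a boto3 SQS client from a user-provided service's credentials.
# The service name ("aws-sqs") and credential keys are whatever you chose
# when creating the user-provided service; adjust to match your own.
import json
import os

import boto3

vcap = json.loads(os.environ["VCAP_SERVICES"])
creds = next(
    s["credentials"]
    for s in vcap.get("user-provided", [])
    if s["name"] == "aws-sqs"
)

sqs = boto3.client(
    "sqs",
    region_name=creds["region"],
    aws_access_key_id=creds["access_key_id"],
    aws_secret_access_key=creds["secret_access_key"],
)

# Because the IAM user behind these credentials is yours, it can be granted
# sqs:CreateQueue, so per-instance response queues work again.
instance_index = os.environ.get("CF_INSTANCE_INDEX", "0")
reply_queue = sqs.create_queue(QueueName=f"B-response-A-instance{instance_index}")
```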
Hope that helps!
Related
I'm using MassTransit with Azure Service Bus as a transport. Some endpoints will live outside of our network, so I'd like to restrict the connection strings to those endpoint queues/topics while allowing the endpoints that are on our network to send to all of the other endpoints.
Is this possible? If I try to set a connection string like that, I get errors indicating a lack of permissions on a topic that I don't think it needs to access.
MassTransit requires the Manage permission, since it creates the topics/queues it needs at startup.
If you are only sending messages to a specific queue, I have heard of some folks having success by ensuring the queue already exists and has the appropriate access for the credentials, but I don't know the details. In the one case I know of, they were using queue:name with GetSendEndpoint on IBus, and then calling Send.
I have exposed a websocket-enabled service endpoint through Azure Application Gateway, and the service is hosted on Azure Service Fabric. A client initiates a websocket connection with my endpoint and is able to exchange data. During certain message flows, my WebSocket-enabled service calls other services hosted on the Service Fabric cluster using Azure Service Bus. These are handled in a completely async manner. Once the other services finish processing, they post a message to the service bus, which my WebSocket service reads back.
The problem I am having is routing the messages back to the right Service Fabric node so that the data can be pushed back to the client at the other end of the WebSocket connection.
In the picture below, you can imagine each node containing multiple services, including the WebSocket-enabled service. Once the WebSocket service posts a message to the service bus, the downstream services start processing, and finally they post a message back to the service bus, which the WebSocket service reads back. Here a random node will pick up the message, and it might not have the relevant WebSocket connection to push the processed data back.
[Sample design diagram]
I have looked at the Redis pub/sub model, and it looks like I would have to maintain the last message processed on the nodes. It also means every node in the cluster would need to read each message and discard it if it doesn't have the WebSocket connection with the client. I am looking for any suggested design models for this kind of problem.
I ran into a similar scenario and didn't like the idea of using a new external service (Redis/SQL Server) as a backplane that would simply duplicate each message/event across all nodes.
The solution I settled on was to lean on a property of actor proxies, using actor events to call back to a specific instance of a stateless service: creating an actor service to act as a pub/sub backplane.
The solution is summarised in this blog post and this GitHub repo. It's worth pointing out that the documentation states actor events are best effort. This hasn't really been an issue while the application is running normally; I presume that during a deployment or failover some events may get lost, but this could be mitigated with additional work.
It's also worth noting that your load-balancing rules should maintain sticky connections between clients and back-end instances. You could create separate rules for WebSockets if you only wanted this to apply to them and not your regular HTTP traffic.
Per the link here, the Azure Functions Service Bus trigger lets you listen on Azure Service Bus. We are currently using mostly AWS for our cloud services, and we are working with a vendor who provides real-time notifications using Azure Service Bus. I would like to know if there is any way to connect to Service Bus using Lambda: any time there is a new message on the bus, we would like our AWS Lambda to be invoked and take it from there.
It's not possible. However, you can use Azure Functions (Azure's serverless offering) triggered by Azure Service Bus to consume the messages.
If you really want a cross-vendor trigger, then you need to consume the Azure Service Bus message, convert it into an HTTP payload, and trigger the AWS Lambda with an HTTP payload that carries the message contents.
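As a sketch of that bridge, an Azure Function written in Python could forward each Service Bus message to your Lambda. This assumes the function is bound to your Service Bus queue/topic in its binding configuration, and the environment variable and function names here are placeholders:

```python
# Sketch of a Service Bus -> Lambda bridge as a Python Azure Function.
# The target Lambda's name and the env var names are placeholders.
import json
import os

import azure.functions as func
import boto3

lambda_client = boto3.client("lambda", region_name=os.environ["AWS_REGION"])


def main(msg: func.ServiceBusMessage) -> None:
    # Invoke the Lambda asynchronously with the Service Bus message body
    # as its payload, mimicking a push-style trigger.
    lambda_client.invoke(
        FunctionName=os.environ["TARGET_LAMBDA"],
        InvocationType="Event",  # async fire-and-forget
        Payload=json.dumps({"body": msg.get_body().decode("utf-8")}),
    )
```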
CloudWatch Event Rule: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-Rule.html
You specify your event source (a supported service and an action/API call) and the targets, set up the required IAM permissions (the Lambda permission etc., if you create it from IaC tools like Terraform), and you are good to go!
Then, as long as the CloudWatch event rule is up, every event that falls under the rule you specify will trigger your Lambda.
An event rule can also be used as a "cron schedule" for Lambda, which I have been using. I did very rarely encounter some delay, though.
Update: to make it as real-time as possible, you would need your vendor to enable message pushing from their Azure account to an AWS endpoint of yours (an API Gateway), which I assume is a no. Other than that, an AWS-self-contained solution is to set up a CloudWatch event rule to poll your vendor's Azure HTTP endpoint every minute and store the results in your own SQS queues.
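A rough sketch of that polling Lambda (the vendor URL, the queue URL, and the assumption that the endpoint returns a JSON list are all placeholders):

```python
# Sketch of a Lambda run on a CloudWatch Events schedule: poll the vendor's
# HTTP endpoint and enqueue anything new into SQS. URLs are placeholders,
# and the endpoint is assumed to return a JSON list.
import json
import os
import urllib.request

import boto3

sqs = boto3.client("sqs")


def handler(event, context):
    # Pull whatever the vendor exposes over HTTP.
    with urllib.request.urlopen(os.environ["VENDOR_URL"], timeout=10) as resp:
        messages = json.loads(resp.read())

    # Fan the results into our own queue for downstream consumers.
    for message in messages:
        sqs.send_message(
            QueueUrl=os.environ["QUEUE_URL"],
            MessageBody=json.dumps(message),
        )
    return {"forwarded": len(messages)}
```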
OK, so I have an Elastic Beanstalk application with a scalable web tier, served behind an ELB. Long story short, I need to be able to subscribe a specific instance within my web tier to an SNS topic. Is it safe for me to use a standard method to get the instance IP address (as detailed in Python here: How can I get the IP address of eth0 in Python?) and then simply subscribe to an SNS topic using that IP as an HTTP subscriber?
Why? Good question...
My data model is made up of many objects, each of which can have an attached set of users who may want to observe those objects. This web tier in my application is responsible for handling the socket interface (using socket.io) for client applications.
When a user is created in the system, so too is an SNS topic for that user, allowing notifications to be pushed when an object they are interested in changes. The way I am planning to set this up, a client application will connect to EB via socket.io, at which point the server instance it connected to will subscribe to that user's SNS topic. Then, when an interesting object changes, notifications will be posted to the associated user's topic, thus notifying the server instance that the client application has an open connection to, which can then send a message down the socket.
I believe it is important that the specific instance is subscribed, rather than the web tier's external CNAME or IP, as the client application is connected to a specific instance, and only that instance can send messages over its socket. Subscribing the load balancer would be no good, as the notification might be delivered to an instance that the user is not connected to.
I believe the question at the top is all I need, but I'm open to creative solutions if my reasoning seems flawed??
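Concretely, what I'm proposing is something like this (the topic ARN and port are made up, and I know the endpoint would also have to answer SNS's SubscriptionConfirmation callback):

```python
# Sketch of my idea: subscribe this instance's own HTTP endpoint to the
# user's SNS topic. The topic ARN and port are made up, and the HTTP
# endpoint must still handle SNS's SubscriptionConfirmation callback.
import socket

import boto3

sns = boto3.client("sns")

# Instance-local IP (same idea as the eth0 approach linked above; this
# shortcut can return 127.0.0.1 on some hosts).
instance_ip = socket.gethostbyname(socket.gethostname())

sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:user-42",
    Protocol="http",
    Endpoint=f"http://{instance_ip}:8080/sns",
)
```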
Just in case anyone gets stuck down this same rabbit hole... the solution was to use Redis pub/sub rather than SNS and SQS.
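For anyone landing here, the rough shape of that Redis solution (the channel naming is whatever you pick):

```python
# Rough shape of the Redis pub/sub replacement. Channel names are up to you.
import redis

r = redis.Redis(host="localhost", port=6379)

# Each instance subscribes only to channels for the users connected to it.
pubsub = r.pubsub()
pubsub.subscribe("user:42")

# Whoever detects an interesting change publishes; only the instance(s)
# subscribed to that user's channel receive it.
r.publish("user:42", '{"object": "...", "change": "..."}')

for message in pubsub.listen():
    if message["type"] == "message":
        # Push message["data"] down this instance's open socket for user 42.
        print(message["data"])
```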
Currently I'm building an application with a microservice architecture.
The first application is an API that does the user authentication and receives requests to initiate/keep a realtime connection with the user (via Socket.io or SockJS); the system stores the socket id in the User object.
The second application is a WORKER that does some work, and sometimes it has to send realtime data to the user.
The question is: How should the second application (the WORKER) send realtime data to the user?
Should the WORKER send a message to the API, and then the API forwards this message to the user?
Or can the WORKER send the message directly to the user?
Thank you
In a perfect world, the service responsible for publishing realtime push notifications should be separated from the other services, since a microservice is a set of narrowly related methods and there is no relation between the authentication ("user") service and the realtime push notification service. Strictly speaking, authentication should itself be a separate service as well; this is just FYI, and there might be a reason you did it this way.
How should the services communicate? There are actually many ways to implement internal communication between services. One is an MQ solution, which adds more technology to your stack, like RabbitMQ, Beanstalkd, Gearman, etc.
You can also communicate on top of the HTTP protocol, but you need to consider that HTTP calls add more cost.
The perfect solution is for each service to expose two interfaces through which it can be invoked: an HTTP interface and an MQ interface (console).
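As a rough sketch of the MQ route with RabbitMQ (the queue name and the idea of carrying the socket id inside the message are illustrative choices, not a standard):

```python
# Sketch of the WORKER -> API hand-off over RabbitMQ (pika), with the API
# forwarding to the user's socket. The queue name and the convention of
# embedding the socket id in the message are illustrative.
import json

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="realtime-out")

# WORKER side: publish the payload plus the target user's socket id
# (looked up from the User object where the API stored it).
channel.basic_publish(
    exchange="",
    routing_key="realtime-out",
    body=json.dumps({"socket_id": "abc123", "data": {"progress": 80}}),
)

# API side: consume and push down the matching Socket.io connection.
def on_message(ch, method, properties, body):
    msg = json.loads(body)
    # Here the API process that owns the connection would emit to the
    # specific socket, e.g. with python-socketio:
    # sio.emit("update", msg["data"], to=msg["socket_id"])
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="realtime-out", on_message_callback=on_message)
channel.start_consuming()
```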