JSON RPC in Golang with AMQP

I use "github.com/streadway/amqp" for async processing requests via queue (RabbitMQ).
And I use "github.com/gorilla/rpc" to register my service without workaround, but I have to use ugly solution for conversion amqp.Delivery to http.Request (mux.Server can works with http.Request only).
Can I use more elegant solution for this task?
I can't find JSON RPC router for AMQP.

First, RPC and pub-sub (e.g. AMQP) are two very different beasts; trying to use one to implement the other isn't necessarily wrong or bad, but it's definitely suspicious, and implies that there could be a breakdown somewhere in the design. So I would highly recommend reconsidering the design, starting with your business goals, and making sure that what you're trying to implement is actually the correct way to achieve the desired functionality.
That said, what you're describing is basically possible, but you want to move your abstraction up a level. Trying to send an http.Request via AMQP is mixing protocols in a way that's only going to lead to more problems. The cleaner way to implement this behavior would be to have an HTTP handler that handles http.Requests (as normal), and an AMQP handler that handles amqp.Delivery messages (as normal), and have each of those handlers call a shared business-logic handler which deals only in your domain model.
So, your HTTP handler would parse an HTTP request and turn it into a domain object. You don't give any concrete details in the question, so I'll invent something, say myapp.UserRegistration. Your HTTP handler would pass that to a myapp.UserService, which would handle the actual business logic of registering a user. The service would return a result, which you would then transform into the appropriate type, marshal to JSON, and send back to the client in an http.Response. myapp.UserService would know nothing about HTTP or AMQP; it operates only on your own domain types.
Your AMQP handler would pick up a message, parse it into the same myapp.UserRegistration type, pass it to the same myapp.UserService handler, and get the same response back - ensuring that the business logic for AMQP and HTTP behaves the same way. Then you'd get your response back, and... well, this is AMQP, so you don't get to send a response to the client. I don't know your setup, maybe you have another queue you can send the response back on, maybe you don't care about the response and can discard it. This is where the difference between RPC and AMQP is most apparent.
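To make the layering concrete, here's a minimal Go sketch under the assumptions above (UserRegistration and UserService are the invented names from this answer, the route and queue wiring are placeholders, and error handling is trimmed):

package main

import (
	"encoding/json"
	"log"
	"net/http"

	"github.com/streadway/amqp"
)

// Invented domain types, matching the example above.
type UserRegistration struct {
	Email string `json:"email"`
}

type RegistrationResult struct {
	UserID string `json:"user_id"`
}

// UserService holds the business logic; it knows nothing about HTTP or AMQP.
type UserService struct{}

func (s *UserService) Register(reg UserRegistration) (RegistrationResult, error) {
	// ... actual registration logic goes here ...
	return RegistrationResult{UserID: "42"}, nil
}

// HTTP handler: parse the protocol envelope, delegate, write the response.
func registerHTTP(svc *UserService) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var reg UserRegistration
		if err := json.NewDecoder(r.Body).Decode(&reg); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		res, err := svc.Register(reg)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		json.NewEncoder(w).Encode(res)
	}
}

// AMQP handler: same business call, different protocol envelope.
func registerAMQP(svc *UserService, deliveries <-chan amqp.Delivery) {
	for d := range deliveries {
		var reg UserRegistration
		if err := json.Unmarshal(d.Body, &reg); err != nil {
			d.Nack(false, false) // malformed message: drop it, don't requeue
			continue
		}
		if _, err := svc.Register(reg); err != nil {
			d.Nack(false, true) // business failure: requeue for retry
			continue
		}
		d.Ack(false)
	}
}

func main() {
	svc := &UserService{}
	http.Handle("/register", registerHTTP(svc))
	// AMQP side (elided): amqp.Dial, conn.Channel(), ch.Consume with
	// autoAck=false, then: go registerAMQP(svc, deliveries)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Note that neither handler contains any business logic; each one only translates between its protocol and the domain types.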
This also makes your business logic, HTTP handler, and AMQP handler more testable in isolation, because you're separating the protocol logic from the business logic. That can be helpful even when you aren't trying to deal with multiple protocols (i.e. it's not a bad idea even if you're only doing HTTP).
I hope that at least gives you enough info to put you on the right track in your implementation. Good luck!

Related

How does a microservice return data to the caller when using a message broker or a message queue?

I am pretty new to microservices, and I am trying to figure out how to set up a microservice architecture in which my publisher, after emitting an event, can receive a response with data from the consumer.
From what I have read about message brokers and message queues, it seems like one-way communication: the producer emits an event (or rather, sends a message) which is handled by the message broker, and then the consumer consumes that event and performs some action.
This allows for decoupled code, which is part of what I'm looking for, but I don't understand whether the consumer is able to return any data to the caller.
Say, for example, I have a microservice that communicates with an external API to fetch data. I want to be able to send a message or emit an event from my front-facing server, which then calls the service that fetches the data, parses it, and returns it back to server1 (my front-facing server).
Is there a way to make message brokers or queues bidirectional, or are they only usable in one direction? I keep reading that message brokers allow services to communicate with each other, but I only find examples in which the data flows one way.
Even reading the RabbitMQ documentation hasn't really made it clear to me how I could do this.
In general, when talking about messaging, it's one-way.
When you send a letter to someone you're not opening up a mind-meld so that they telepathically communicate their response to you.
Instead, you include a return address (or some other means of contacting you).
So, to map a request-response interaction onto explicit messaging (e.g. via a message queue), the solution is the same: you include some directions which the recipient can/will interpret as "send a response here". That could, for instance, be "publish a message on this queue with this correlation ID".
Your publisher then, after sending this message, subscribes to the queue it designated and waits for a message with the expected correlation ID.
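Using the streadway/amqp package mentioned in the first question, the requesting side of that pattern looks roughly like this (the queue name "rpc_queue", the payload, and the fixed correlation ID are all placeholders, and most error handling is trimmed):

package main

import (
	"log"

	"github.com/streadway/amqp"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}

	// An exclusive, auto-delete queue that acts as our "return address".
	replyQ, _ := ch.QueueDeclare("", false, true, true, false, nil)
	replies, _ := ch.Consume(replyQ.Name, "", true, true, false, false, nil)

	corrID := "corr-123" // in practice, a freshly generated UUID

	// Send the request, telling the recipient where and how to respond.
	ch.Publish("", "rpc_queue", false, false, amqp.Publishing{
		ContentType:   "application/json",
		CorrelationId: corrID,
		ReplyTo:       replyQ.Name,
		Body:          []byte(`{"action":"fetch"}`),
	})

	// Block until the reply carrying our correlation ID shows up.
	for d := range replies {
		if d.CorrelationId == corrID {
			log.Printf("reply: %s", d.Body)
			break
		}
	}
}

The consuming service does the mirror image: it reads d.ReplyTo and d.CorrelationId off each request and publishes its answer accordingly.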
Needless to say, this is fairly elaborate: you are, in some sense, reimplementing a decent portion of a session protocol like TCP on top of a datagram protocol like IP (albeit in this case, we may have some stronger reliability guarantees than we'd get from IP). It's worth noting that this sort of request-response interaction intrinsically couples the two parties (we can't really say "sender and receiver": each is the other's audience), so we're basically putting in some effort to decouple the two sides and then some more effort to recouple them.
With that in mind, if the actual business use case calls for a request-response interaction like this, consider implementing it with an actual request-response protocol (e.g. REST over HTTP or gRPC...) and accept that you have this coupling.
Alternatively, if you really want to pursue loose coupling, go for broke and embrace the asynchronicity at the heart of the universe (maybe that way lies true enlightenment?). Have your publisher return success with that correlation ID as soon as it's sent its message. Meanwhile, have a different service track the state of those correlation IDs and expose a query interface (CQRS, hooray!). Your client can then check at any time whether the thing it wanted succeeded, even if its connection to your publisher gets interrupted.
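A toy sketch of that shape in Go (here the correlation-ID tracking is just an in-memory map inside one process; in the design described above it would be a separate consumer maintaining its own store, and the /jobs routes are invented):

package main

import (
	"encoding/json"
	"net/http"
	"sync"
)

var (
	mu     sync.Mutex
	status = map[string]string{} // correlation ID -> "pending" | "done"
)

// submit accepts the work, records the correlation ID, and returns at once.
func submit(w http.ResponseWriter, r *http.Request) {
	id := "corr-123" // in practice, a freshly generated UUID
	mu.Lock()
	status[id] = "pending"
	mu.Unlock()
	// ... publish the message with CorrelationId: id ...
	w.WriteHeader(http.StatusAccepted)
	json.NewEncoder(w).Encode(map[string]string{"id": id})
}

// check is the query interface: the client polls it with its correlation ID.
func check(w http.ResponseWriter, r *http.Request) {
	id := r.URL.Query().Get("id")
	mu.Lock()
	s, ok := status[id]
	mu.Unlock()
	if !ok {
		http.NotFound(w, r)
		return
	}
	json.NewEncoder(w).Encode(map[string]string{"id": id, "status": s})
}

func main() {
	http.HandleFunc("/jobs", submit)
	http.HandleFunc("/jobs/status", check)
	http.ListenAndServe(":8080", nil)
}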
Queues are the wrong level of abstraction for request-reply. You can build an application out of them, but it would be nontrivial to support and operate.
The solution is to use an orchestration system like temporal.io or AWS Step Functions. These services out of the box provide state management, asynchronous communication, and automatic recovery in case of various types of failures.

Strategy for passing same payload between messages when optional outbound gateways fail

I have a workflow whose message payload (MasterObj) is being enriched several times. During the 2nd enrichment an UnknownHostException was thrown by an outbound gateway. My error channel on the enricher is called, but the message the error channel receives is an exception, and the failed message inside that exception is no longer my MasterObj (the original payload); it is now the object produced by the request-payload-expression on the enricher.
The enricher calls an outbound gateway, and business-wise this call is optional: I just want to continue my workflow with the payload I've been enriching. The docs say the error-channel on the enricher can be used to provide an alternate object (in place of what the enricher's request-channel would return), but even when I return an object from the enricher's error-channel, it still takes me to the workflow's overall error channel.
How do I trap errors from the enricher's outbound gateways and continue processing my workflow with the same payload I've been working on?
Is trying to maintain a single payload object for the entire workflow the right strategy? I need to be able to access it whenever I need it.
I was thinking of using a bean scoped to the session where I store the payload, but that seems to defeat the purpose of SI, no?
Thanks.
Well, if you're worried about your MasterObj in the error-channel flow, don't use that request-payload-expression; let the original payload go into the enricher's sub-flow.
You can always use a simple <transformer expression=""> in that flow.
On the other hand, you're right: it isn't a good strategy to carry a single object through the whole flow. You pass messages via channels, and it isn't good to be tied to one object at each step. The point of Spring Integration is that you can switch between different MessageChannel types at any time with little effort from their producers and consumers. You can also switch to a distributed mode where consumers and producers run on different machines.
If you still need to enrich the same object several times, consider writing some custom Java code. You can use a @MessagingGateway for that, so you still get the benefits of Spring Integration.
And you're right: a session-scoped bean is not a good fit for an integration flow, because you can simply switch to a different channel type and lose the ThreadLocal context.

Combine actor model with RESTful API

I've been studying the actor model for some time and trying to figure out how to correctly combine it with a RESTful API. I'm struggling with how to separate the responsibilities of the two layers, whether via the ask pattern or actor-per-request. With both patterns, request-reply semantics leak into the actor model, which seems like an anti-pattern. Most messages sent to an actor (initiated by HTTP requests) require a reply, and the receiving actors end up with multiple conditionals where they need to signal the API that they cannot fulfil the request.
Furthermore, what is considered good practice with regard to input validation: should it be implemented as part of the HTTP layer (for example, checking that field X is a valid email address, or that field Y holds an integer)? And for complex domain logic, how (and should) the actor notify the sender when a (pre-)condition fails?
While request/reply is an anti-pattern in inter-actor communication, nothing stands in your way of using it from outside the actor system. You can use Ask from there and, with a combination of Forward + Tell back to the original sender, send the reply without using a request/reply model inside the actors themselves.
When it comes to input validation, of course the simple checks (field presence, email format, etc.) can easily be done at the web framework's level. However, more advanced cases (like permission management) will probably make use of actors, at least if your business logic uses them as well.
For complex scenarios, try to think in terms of protocols. Describe a set of contracts between actors and/or external services, and use messages to control the flow of your logic. That kind of reasoning is usually hard to describe, but it's usually very easy to draw with a pencil ;)
For example, you might use some kind of AuthorizationGate actor which, given an unauthorized request, validates it: on auth failure it sends a RequestFailed message back to the original sender (the asker); on success it transforms the message into a ValidRequest and sends it to the actor responsible for handling that message type. That actor (which handles only valid requests) processes it and sends RequestSucceed or RequestFailed back to the original sender (remember to either store that sender as a message field, or use actorRef.Forward instead of actorRef.Tell so you don't override it).
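The question doesn't name a framework, so here is that protocol sketched in plain Go, with goroutines standing in for actors and a reply channel standing in for the stored sender; the message names follow the example above:

package main

import "fmt"

// Messages exchanged by the protocol.
type RequestFailed struct{ Reason string }
type RequestSucceed struct{}

// AuthRequest carries its "sender" as a reply channel, mirroring how an
// actor message stores the original sender so the reply isn't overridden.
type AuthRequest struct {
	Token   string
	ReplyTo chan interface{}
}

// authorizationGate validates requests; valid ones are forwarded with the
// original ReplyTo intact, failures are answered immediately.
func authorizationGate(in <-chan AuthRequest, valid chan<- AuthRequest) {
	for req := range in {
		if req.Token == "" {
			req.ReplyTo <- RequestFailed{Reason: "unauthorized"}
			continue
		}
		valid <- req
	}
}

// handler only ever sees requests that passed the gate.
func handler(valid <-chan AuthRequest) {
	for req := range valid {
		// ... process the request ...
		req.ReplyTo <- RequestSucceed{}
	}
}

func main() {
	in := make(chan AuthRequest)
	valid := make(chan AuthRequest)
	go authorizationGate(in, valid)
	go handler(valid)

	// The "ask" from outside: send the request, then wait for its reply.
	reply := make(chan interface{}, 1)
	in <- AuthRequest{Token: "secret", ReplyTo: reply}
	fmt.Printf("%#v\n", <-reply)
}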

Understanding goroutines for web API

Just starting out with Go and hoping to create a simple Web API. I'm looking into using Gorilla mux (http://www.gorillatoolkit.org/pkg/mux) to handle web requests.
I'm not sure how best to use Go's concurrency options to handle the requests. Did I read somewhere that the main function is actually a goroutine, or should I dispatch each request to a goroutine as it is received? Apologies if I'm "way off".
Assuming you're using Go's http.ListenAndServe to serve your HTTP requests, the documentation clearly states that each incoming connection is handled in a separate goroutine for you: http://golang.org/pkg/net/http/#Server.Serve
You would usually call ListenAndServe from your main function.
Gorilla mux is simply a package for more flexible routing of requests to your handlers than the http.DefaultServeMux. It doesn't actually handle the incoming connection or request; it simply relays it to your handler.
I highly suggest you read a bit of the documentation, specifically this guide https://golang.org/doc/articles/wiki/#tmp_3 on writing web applications.
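For illustration, a minimal server (the route pattern and port are arbitrary); the handler below is already invoked on its own goroutine for each incoming connection:

package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/gorilla/mux"
)

func hello(w http.ResponseWriter, r *http.Request) {
	// No explicit concurrency needed here: net/http spawned a goroutine
	// for this connection before the router ever saw the request.
	fmt.Fprintf(w, "hello, %s\n", mux.Vars(r)["name"])
}

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/hello/{name}", hello)
	log.Fatal(http.ListenAndServe(":8080", r))
}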
I'm providing an answer even though I voted to close for being too broad.
Anyway, none of that is really necessary; you're overthinking it. If you haven't read it, this looks like a decent tutorial: http://thenewstack.io/make-a-restful-json-api-go/
You can really just set up routes like you would with most typical REST frameworks and let the webserver/framework worry about concurrency at the request-handling level. You would only employ goroutines to generate the response of a request, say if you needed to aggregate data from 10 files that are all in a folder. It's a contrived example, but this is where you would spin off one goroutine per file, aggregate all the information by reading off a channel, and then return the result. You can expect all entry points to your code to be called in an asynchronous, non-blocking fashion, if that makes sense.
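A sketch of that contrived example (the directory path is made up, and it aggregates with a WaitGroup plus a buffered channel rather than a non-blocking select, which keeps it shorter):

package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
	"sync"
)

// aggregate reads every file in dir concurrently, one goroutine per file,
// and collects the contents off a channel.
func aggregate(dir string) []string {
	paths, _ := filepath.Glob(filepath.Join(dir, "*"))

	results := make(chan string, len(paths))
	var wg sync.WaitGroup
	for _, p := range paths {
		wg.Add(1)
		go func(path string) {
			defer wg.Done()
			if data, err := ioutil.ReadFile(path); err == nil {
				results <- string(data)
			}
		}(p)
	}
	wg.Wait()
	close(results)

	var out []string
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	fmt.Println(len(aggregate("./data")), "files aggregated")
}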

Ruby HTTP server without networking

I am trying to add an HTTP server to an existing Ruby application. The application is based around a select loop, and I want to handle incoming HTTP requests there too (it is important to process the requests in the same thread, or I have to jump through hoops to marshal them there).
Ruby has plenty of solutions for standalone HTTP servers, but I can't seem to find a library which implements an HTTP server on an existing socket. I don't want the HTTP library to open a port and wait, I want to feed it sockets.
The basic logic I'm looking for is this:
handler = SomeHTTPParsingLibrary.new
# set up handler callbacks, etc on handler...
while socket = get_incoming_connection()
  handler.handle_request(socket)
end
Are there any existing Ruby libraries that can work like this? HTTP is a simple enough protocol, but there are enough irritating details involved (I need cookies, basic auth, etc) that I'd rather not roll my own.
You may have to roll your sleeves up a bit to figure out what methods to call, but I'd suggest trying the HTTPParser class from within mongrel.
A quick glance through the code in httprequest.rb (webrick - from ruby stdlib) seems like it might suit your purpose.
A WEBrick::HTTPRequest object is able to accept a socket as an argument to its parse() method. It will then block, and return when the request object has been fully populated with the incoming HTTP request.
eg:
# @config is a WEBrick configuration hash, e.g. WEBrick::Config::HTTP
res = HTTPResponse.new(@config)
req = HTTPRequest.new(@config)
# some code to "select" a socket goes here
# sock is active, hand it over to the req object for reading.
req.parse(sock)
res.request_method = req.request_method
Of course, this assumes that this thread will block while the current request handling completes.
OTOH, something like tmm1/http_parser.rb might also fit your needs, but it sacrifices other things (like cookie handling) in favor of speed.
