My webserver is written in an AppM = ReaderT Env IO monad. For the purpose of centralized auditing code, the Env type needs to know the request's path. Therefore, my hoistServer usage looks something like this:
\req respHandler ->
  let application = serve apiProxy $ hoistServer apiProxy (\appm -> runAppM req appm) myServer
  in  application req respHandler
Notice how the second argument to hoistServer depends on the incoming req.
What are the performance implications of doing something like this? Is the servant application going to get "re-created" for every incoming request?
Is there any other way to possibly write this code to make it more performant?
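If profiling shows that rebuilding the router matters, one way to make the natural transformation request-independent, so that serve runs exactly once at startup, is to pass the path through the WAI vault and servant's Vault combinator. A minimal sketch only: the toy API, stashPath, and the port are made up, and in a real app the handler would build Env from the looked-up path and run AppM as before:

{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeOperators #-}

import           Data.ByteString (ByteString)
import qualified Data.Vault.Lazy as V
import           Network.Wai (Middleware, rawPathInfo, vault)
import           Network.Wai.Handler.Warp (run)
import           Servant

-- Vault :> gives every handler access to the request's vault.
type API = Vault :> Get '[PlainText] String

-- Middleware that stores the raw path under a key created at startup.
stashPath :: V.Key ByteString -> Middleware
stashPath key app req =
  app req { vault = V.insert key (rawPathInfo req) (vault req) }

-- In a real app, this is where Env would be built and runAppM applied.
server :: V.Key ByteString -> Server API
server key v = pure (show (V.lookup key v))

main :: IO ()
main = do
  key <- V.newKey
  -- 'serve' (and the router it builds) is evaluated once, not per request.
  run 8080 (stashPath key (serve (Proxy :: Proxy API) (server key)))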
This may be a question about coroutines in general, but in my Ktor server (Netty engine, default configuration) application I perform several asynchronous calls to a database and an API endpoint, and I want to make sure I am using coroutines efficiently. My questions are as follows:
Is there a tool or method to work out whether my code is using coroutines effectively, or do I just need to spam my endpoint with curl and measure the performance of moving work to another context, e.g. compute?
I don't want to start moving tasks/jobs to another context 'just in case', but should I treat the default coroutine context in my Route.route() like the Android main thread and perform the minimum amount of work on it?
Here is a rough example of the code that I'm using:
fun Route.route() {
    get("/") {
        // getRemoteText() is nullable, so fall back to an error body
        call.respondText(getRemoteText() ?: "error")
    }
}
suspend fun getRemoteText(): String? =
    suspendCoroutine { cont ->
        // ThirdPartyLibrary stands in for a callback-based client
        ThirdPartyLibrary.get { success, data ->
            if (success) {
                cont.resume(data)
            } else {
                cont.resume(null)
            }
        }
    }
You could use something like Apache JMeter, but writing a script and spamming your server with curl also seems like a good option to me.
Coroutines are pretty efficient when it comes to context/thread switching, and with Dispatchers.Default and Dispatchers.IO you get a thread pool. There is some documentation around this, but I think you can definitely leverage these dispatchers for heavy operations.
There are a few tools for load-testing endpoints. JMeter is good, and there are also command-line tools like wrk, wrk2 and siege.
Of course, context switching has a cost. The coroutine in routing is safe to run blocking operations in unless you have the shareWorkGroup option set. However, it's usually good to use a separate thread pool, because you can control its size (the maximum number of threads) so that you don't bring your database down.
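To make the separate-pool suggestion concrete, here is a minimal sketch (the pool size and all names are made up): blocking database work is confined to a fixed-size dispatcher, so it can never exhaust Ktor's worker threads.

import kotlinx.coroutines.asCoroutineDispatcher
import kotlinx.coroutines.withContext
import java.util.concurrent.Executors

// At most 10 threads ever touch the database, no matter how many
// requests are in flight.
val dbDispatcher = Executors.newFixedThreadPool(10).asCoroutineDispatcher()

suspend fun loadUser(id: Long): String =
    withContext(dbDispatcher) {
        // A blocking JDBC/driver call would go here; it now occupies a
        // dbDispatcher thread instead of a Ktor worker thread.
        "user-$id"
    }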
I am new to gRPC and trying to implement a service which should have a provision to take parameters similar to the following:
param 1
param 2
large file 1 - preferably stream
large file 2 - preferably stream
From my understanding so far, this cannot be done as a single RPC method. I would have to split it into two methods: say, one unary method to take in param 1 and param 2, and another method that takes in a file as a stream. But then I have to maintain state across the two RPC calls, because logically it is one call from the client's perspective. Is that the right way to implement this scenario? Is there a better solution?
It may be helpful for you to provide more details as to what you're trying to do.
It's sometimes useful to model gRPC as if there were no remote (no network) and you were invoking plain old methods|functions.
Let's assume you're doing a file upload:
fn uploader(name: string, content: []byte) -> bool {}
This is a (synchronous) unary method that takes the name of the file and some (arbitrarily-sized) content.
Its implementation may be:
fn uploader(name: string, content: []byte) -> bool {
    content.as_chunks()
        .map(|c| upload_chunk(name, c.start, c.data))
        .reduce(|resp| ... )
}
NOTE upload_chunk closes over name because this method needs to track which file(name) is being uploaded.
The implementation also uses a synchronous unary method (upload_chunk). Each time it's called, we await the reply before proceeding. We need to do something with all the replies too.
An alternative (and potentially more efficient) implementation is to call upload_chunk asynchronously; we don't await each reply. Conventionally, we would gather futures from each upload_chunk and manage them. Alternatively, we could delegate this to some other mechanism: we create some channel|stream, we pump data onto it, and then we close it:
fn uploader(name: string, content: []byte) -> bool {
    let s = Stream::new()
    content.as_chunks().for_each(|c| s.upload_chunk(name, c.start, c.data))
    s.close()
}
NOTE I'm assuming s.upload_chunk() is non-blocking and that s.close() returns success or failure.
Hopefully this shows that, while the client may think it's making one call (uploader), either approach will involve potentially multiple calls under the hood, and either approach (unary, streaming, or a mix of both) can be implemented using gRPC.
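For what it's worth, the common way to keep this a single RPC from the client's perspective is a client-streaming method whose first message carries the parameters and whose remaining messages carry file chunks. A sketch in proto3 (all names here are hypothetical):

syntax = "proto3";

// One logical call: the client streams params first, then chunks.
service Uploader {
  rpc Upload(stream UploadRequest) returns (UploadStatus);
}

message Params {
  string param1 = 1;
  string param2 = 2;
}

message FileChunk {
  uint32 file_index = 1;  // e.g. 1 or 2, so both large files can share the stream
  bytes data = 2;
}

message UploadRequest {
  oneof payload {
    Params params = 1;    // sent exactly once, as the first message
    FileChunk chunk = 2;  // then repeated until the stream is closed
  }
}

message UploadStatus {
  bool ok = 1;
}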
I'm building an event collector. It will receive an HTTP request like http://collector.me/?uuid=abc123&product=D3F4&metric=view and write the request parameters to an Apache Kafka topic. For now I'm using Plug, Cowboy and KafkaEx.
defmodule Collector.Router do
  import Plug.Conn

  def init(opts) do
    opts
  end

  def call(conn, _opts) do
    conn = fetch_query_params(conn)
    KafkaEx.produce("test", 0, "#{inspect conn.query_params}")

    conn
    |> put_resp_content_type("text/plain")
    |> send_resp(200, "OK")
  end
end
AFAIK, Cowboy spawns a new process for each request, so I think writing to Kafka in the call function is a reasonable approach, because it's easy to create hundreds of thousands of processes in Elixir. But I wonder whether this is the right way to do it. Do I need a queue in front of Kafka or something like that? My goal is to handle as many concurrent requests as possible.
Thanks.
Consider using the Confluent Kafka REST Proxy because then you might not need to write any server side code.
https://github.com/confluentinc/kafka-rest
Worst case, you might need to rewrite the incoming URL into a properly formatted HTTP POST with JSON data and the right Content-Type HTTP header. This can be done with an application load balancer or a basic reverse proxy like HAProxy or nginx.
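As for the queue question: a hedged Elixir sketch of one option, assuming a Task.Supervisor named Collector.TaskSupervisor has been started in the application's supervision tree. The produce is moved off the request process, so the HTTP response no longer waits on the Kafka ack:

defmodule Collector.Router do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    conn = fetch_query_params(conn)

    # Fire-and-forget: the produce runs under a supervised Task, so a slow
    # or unreachable broker doesn't delay the HTTP response.
    Task.Supervisor.start_child(Collector.TaskSupervisor, fn ->
      KafkaEx.produce("test", 0, "#{inspect conn.query_params}")
    end)

    conn
    |> put_resp_content_type("text/plain")
    |> send_resp(200, "OK")
  end
end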
How can I achieve MDC logging (as in Java) in Go?
I need to add UUIDs in all server logs in order to be able to trace concurrent requests.
Java MDC relies on thread local storage, something Go does not have.
The closest thing is to thread a Context through your stack.
This is what more and more libraries are doing in Go.
A somewhat typical way is to do this via a middleware package that adds a request id to the context of a web request, like:
req = req.WithContext(context.WithValue(req.Context(), "requestId", ID))
Then, assuming you pass the context around, you pull it out with ctx.Value("requestId") and use it wherever it makes sense.
Possibly making your own custom logger function like:
func logStuff(ctx context.Context, msg string) {
    log.Println(ctx.Value("requestId"), msg) // call stdlib logger
}
There's a bunch of ways you may want to handle this, but that's a fairly simple form.
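Here's a fuller sketch of that middleware idea, assuming the github.com/google/uuid package (note that go vet prefers an unexported context-key type over a bare string; the string is kept here to match the snippets above):

package main

import (
    "context"
    "log"
    "net/http"

    "github.com/google/uuid"
)

// requestID wraps a handler and stamps each request's context with a UUID.
func requestID(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ctx := context.WithValue(r.Context(), "requestId", uuid.NewString())
        next.ServeHTTP(w, r.WithContext(ctx))
    })
}

func handler(w http.ResponseWriter, r *http.Request) {
    // The id set by the middleware is available anywhere the context goes.
    log.Println(r.Context().Value("requestId"), "handling request")
    w.Write([]byte("ok"))
}

func main() {
    http.ListenAndServe(":8080", requestID(http.HandlerFunc(handler)))
}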
I have some confusion about how to design an asynchronous part of a web app. My setup is simple: a visitor uploads a file, a bunch of computation is done on the file, and the results are returned. Right now I'm doing this all in one request. There is no user model and the file is not stored on disk.
I'd like to change it so that the results are delivered in two parts. The first part comes back with the request response because it's fast. The second part might be heavy computation and a lot of data, so I want it to load asynchronously, whenever it's done. What's a good way to do this?
Here are some things I do know about this. Usually, asynchronicity is done with ajax requests. The request will be to some route, let's say /results. In my controller, there'll be a method written to respond to /results. But this method will no longer have any information from the previous request, because HTTP is stateless. To get around this, people pass info through the request. I could either pass all the data through the request, or I could pass an id which the controller would use to look up the data somewhere else.
My data is a big python object (a pandas DataFrame). I don't want to pass it through the network. If I use an id, the controller will have to look it up somewhere. I'd rather not spin up a database just for these short durations, and I'd also rather not convert it out of python and write to disk. How else can I give the ajax request access to the python object across requests?
My only idea so far is to have the initial request trigger my framework to render a second route, /uuid/slow_results. This would be served until the ajax request hits it. I think this would work, but it feels pretty ad hoc and unnatural.
Is this a reasonable solution? Is there another method I don't know? Or should I bite the bullet and use one of the aforementioned solutions?
(I'm using the web framework Flask, though this question is probably framework agnostic.
PS: I'm trying to get better at writing SO questions, so let me know how to improve it.)
So if your app is only being served by one Python process, you could just have a global object that's a map from ids to DataFrames, but you'd also need some way of expiring them out of the map so you don't leak memory.
So if your app is running on multiple machines, you're screwed. If it's just running on one machine, it might be sitting behind apache, and apache might spawn multiple Python processes, in which case you'd still be screwed. You can find out by running ps aux and counting the instances of python.
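To make the expiry part concrete, here is a sketch of such an in-process store (the names are illustrative, and this still assumes a single Python process):

import threading
import time

class ExpiringStore:
    def __init__(self, ttl_seconds=300):
        self._ttl = ttl_seconds
        self._lock = threading.Lock()
        self._items = {}  # id -> (inserted_at, value)

    def put(self, key, value):
        with self._lock:
            self._items[key] = (time.monotonic(), value)

    def pop(self, key):
        self._evict()
        with self._lock:
            entry = self._items.pop(key, None)
        return entry[1] if entry else None

    def _evict(self):
        # Drop anything older than the TTL so the map can't grow forever.
        now = time.monotonic()
        with self._lock:
            expired = [k for k, (t, _) in self._items.items() if now - t > self._ttl]
            for k in expired:
                del self._items[k]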
Serializing to a temporary file or a database are fine choices in general, but if you don't like either in this case and don't want to set up e.g. Celery just for this one thing, then multiprocessing.connection is probably the tool for the job. Copying and lightly modifying from here, the box running your webserver (or another box, if you want) would have another process that runs this:
from multiprocessing.connection import Listener
import traceback

# id -> computed result, held in this worker process's memory
RESULTS = dict()

def do_thing(data):
    # stand-in for the heavy computation
    return "your stuff"

def worker_client(conn):
    try:
        while True:
            msg = conn.recv()
            if msg['type'] == 'answer':  # request for a calculated result
                answer = RESULTS.get(msg['id'])
                conn.send(answer)
                if answer:
                    del RESULTS[msg['id']]
            else:  # request to start a calculation
                conn.send("doing thing on {}".format(msg['id']))
                RESULTS[msg['id']] = do_thing(msg)
    except EOFError:
        print('Connection closed')

def job_server(address, authkey):
    serv = Listener(address, authkey=authkey)
    while True:
        try:
            client = serv.accept()
            worker_client(client)
        except Exception:
            traceback.print_exc()

if __name__ == '__main__':
    job_server(('', 25000), authkey=b'Alex Altair')
and then your web app would include:
from multiprocessing.connection import Client

client = Client(('localhost', 25000), authkey=b'Alex Altair')

def respond(request):
    client.send(request)
    return client.recv()
Design could probably be improved but that's the basic idea.
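For instance, a hypothetical pair of Flask-side helpers, matching the message shape the worker expects (a 'type' and an 'id' key, where 'compute' is just any non-'answer' type):

import uuid

def handle_upload(file_data):
    job_id = str(uuid.uuid4())
    # Kicks off do_thing() in the worker process; the worker's first
    # reply is just an acknowledgement string.
    respond({'type': 'compute', 'id': job_id, 'data': file_data})
    return job_id  # the browser uses this id in the follow-up ajax call

def handle_ajax(job_id):
    # Returns the stored result (and clears it), or None if the id is unknown.
    return respond({'type': 'answer', 'id': job_id})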