I've been playing around with Netflix OSS Hystrix and am now exploring different configurations and possibilities for including it in my project. Among other things, my application needs to do network calls in HystrixCommand.getFallback() ...
Now, I read that it is best practice NOT to do network calls there and instead provide some generic answer (see the Hystrix wiki), and that if it is really necessary to do this, one should use a HystrixCommand or HystrixObservableCommand.
My question is: if I use a HystrixCommand, should I invoke it, e.g., with HystrixCommand.run() or HystrixCommand.queue() or some other option?
Also, in my logs I've noticed that getFallback() can be invoked from different calling threads (e.g., Hystrix-Timer; I guess this depends on who interrupted the run method). Here I would like to know how calling HystrixCommand.run() from the fallback would affect my performance, since the calling thread will stay alive and blocked until that command finishes.
EDIT: With fresh eyes on the problem, I am now thinking that the "generic answer" (mentioned above) could be some form of promise, i.e., a CompletableFuture<T> in Java terminology. Returning the promise from HystrixCommand.run() would allow the calling (Hystrix-internal) thread to return immediately, thus releasing it. However, I am now stuck on implementing this behavior. Any ideas?
Thanks a lot for any help!
Use a HystrixCommand's execute method. Example:
@Override
protected YourReturnType getFallback() {
    return new MyHystrixFallbackCommand().execute();
}
If you want to work with async "promises" then you should probably implement a HystrixObservableCommand.
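For the async route, here's a minimal sketch of a HystrixObservableCommand whose fallback is itself non-blocking (assuming RxJava 1.x, which Hystrix builds on; the simulated failure and fallback value are illustrative, not a real service call):

import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixObservableCommand;
import rx.Observable;

public class MyObservableCommand extends HystrixObservableCommand<String> {

    public MyObservableCommand() {
        super(HystrixCommandGroupKey.Factory.asKey("MyGroup"));
    }

    @Override
    protected Observable<String> construct() {
        // Simulated failing primary call; in reality a non-blocking network call.
        return Observable.error(new RuntimeException("primary service down"));
    }

    @Override
    protected Observable<String> resumeWithFallback() {
        // Async fallback; in reality this could wrap a backup network call.
        return Observable.just("fallback-value");
    }
}

Calling new MyObservableCommand().observe() then hands back an Observable the caller can subscribe to, so neither the happy path nor the fallback ties up the calling thread.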
I have a REST API implemented using Spring WebFlux and Kotlin, with an endpoint that is used to start a long-running computation. As it's not really elegant to just let the caller wait until the computation is done, it should immediately return an ID which the caller can use to fetch the result from a different endpoint once it's available. The computation is started in the background and should just complete whenever it's ready - I don't really care about when exactly it's done, as it's the caller's job to poll for it.
As I'm using Kotlin, I thought the canonical way of solving this would be coroutines. Here's a minimal example of what my implementation looks like (using Spring's Kotlin DSL instead of traditional controllers):
import org.springframework.web.reactive.function.server.coRouter
// ...

fun route() = coRouter {
    POST("/big-computation") { request: ServerRequest ->
        val params = request.awaitBody<LongRunningComputationParams>()
        val runId = GlobalResultStorage.prepareRun(params)
        coroutineScope {
            launch(Dispatchers.Default) {
                GlobalResultStorage.addResult(runId, longRunningComputation(params))
            }
        }
        ok().bodyValueAndAwait(runId)
    }
}
This doesn't do what I want, though, as the outer Coroutine (the block after POST("/big-computation")) waits until its inner Coroutine has finished executing, thus only returning runId after it's no longer needed in the first place.
The only possible way I could find to make this work is by using GlobalScope.launch, which spawns a Coroutine without a parent that awaits its result, but I read everywhere that you are strongly discouraged from using it. Just to be clear, the code that works would look like this:
POST("/big-computation") { request: ServerRequest ->
val params = request.awaitBody<LongRunningComputationParams>()
val runId = GlobalResultStorage.prepareRun(params);
GlobalScope.launch {
GlobalResultStorage.addResult(runId, longRunningComputation(params))
}
ok().bodyValueAndAwait(runId)
}
Am I missing something painfully obvious that would make my example work using proper structured concurrency or is this really a legitimate use case for GlobalScope? Is there maybe a way to launch the Coroutine of the long running computation in a scope that is not attached to the one it's launched from? The only idea I could come up with is to launch both the computation and the request handler from the same coroutineScope, but because the computation depends on the request handler, I don't see how this would be possible.
Thanks a lot in advance!
Maybe others won't agree with me, but I think this whole aversion to GlobalScope is a little exaggerated. I often have the impression that some people don't really understand what the problem with GlobalScope is, and they replace it with solutions that share similar drawbacks or are effectively the same. But well, at least they don't use the evil GlobalScope anymore...
Don't get me wrong: GlobalScope is bad. Especially because it is just too easy to use, so it is tempting to overuse it. But there are many cases when we don't really care about its disadvantages.
The main goals of structured concurrency are:
Automatically waiting for subtasks, so we don't accidentally move on before our subtasks finish.
Cancelling individual jobs.
Cancelling/shutting down the service/component that schedules background tasks.
Propagating failures between asynchronous tasks.
These features are critical for building reliable concurrent applications, but there are surprisingly many cases where none of them really matters. Take your example: if your request handler works for the whole lifetime of the application, then you need neither the waiting-for-subtasks nor the shutdown feature. You don't want to propagate failures. Cancelling individual subtasks is not really applicable here, because whether we use GlobalScope or a "proper" solution, we do this exactly the same way - by storing the task's Job somewhere, as sketched below.
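For illustration, individual cancellation looks the same whichever way we launch (a sketch; the jobs map and the runId-keyed functions are hypothetical):

import java.util.concurrent.ConcurrentHashMap
import kotlinx.coroutines.*

// Keep each task's Job so it can be cancelled individually later.
val jobs = ConcurrentHashMap<String, Job>()

fun startComputation(runId: String) {
    jobs[runId] = GlobalScope.launch {
        // the long-running computation
    }
}

fun cancelComputation(runId: String) {
    jobs[runId]?.cancel()
}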
Therefore, I would say that the main reasons why GlobalScope is discouraged do not apply to your case.
Having said that, I still think it may be worth implementing the solution that is usually suggested as a proper replacement for GlobalScope. Just create a property with your own CoroutineScope and use it to launch coroutines:
private val scope = CoroutineScope(Dispatchers.Default)

fun route() = coRouter {
    POST("/big-computation") { request: ServerRequest ->
        ...
        scope.launch {
            GlobalResultStorage.addResult(runId, longRunningComputation(params))
        }
        ...
    }
}
You won't get too much from it. It won't help you with resource leaks, and it won't make your code more reliable or anything like that. But at least it will help keep background tasks somewhat organized. It will be technically possible to determine who the owner of the background tasks is. You can easily configure all background tasks in one place, for example provide a CoroutineName or switch to another thread pool. You can count how many active subtasks you have at the moment. It will make it easier to add graceful shutdown should you need it. And so on.
But most importantly: it is cheap to implement. You won't get too much out of it, but it won't cost you much either, so why not.
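For example, a sketch of such a scope, with all the choices here (name, dispatcher, supervisor job) being illustrative:

import kotlinx.coroutines.*

// One place to configure all background work: a SupervisorJob so one failed
// task doesn't cancel its siblings, a name for debugging/logging, and a
// dispatcher choice.
private val backgroundScope = CoroutineScope(
    SupervisorJob() + Dispatchers.Default + CoroutineName("big-computations")
)

// If graceful shutdown is ever needed, it's one call away:
fun shutdownBackgroundWork() = backgroundScope.cancel()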
Is there a way to get a list of all registered consumers? I need to write a test that makes sure all required queues have registered consumers. That unfortunately seems impossible, or at least I don't know of a way.
My test should cover the scenario where you forget to register a consumer, which is quite hard to discover unless you're using a request/response pattern, where it throws a timeout exception.
And Moq is not much help here, because these are mostly generic methods (like AddConsumer) with constraints, which makes it quite a problem to verify that a certain method was called with specific parameters. I could switch to calling the non-generic overloads, but I would like to keep that as a last resort.
Thank you
After the container is configured, you could use the IServiceCollection directly to check if registrations exist.
Assert.That(services.Any(x => x.ServiceType == typeof(SomeConsumer)));
Do that for each consumer.
I usually see the use of either a promise or a future at the start of a Vert.x verticle. Is there any specific difference between the two?
I have read about their differences in the Scala language; is it the same in Vert.x too?
Also, how do I know when to use a promise and when to use a future?
The best explanation I've read:
Think of Promise as the producer (used by the producer on one side of an async operation) and Future as the consumer (used by the consumer on the other side).
Futures vs. Promises
A Promise is for defining a non-blocking operation, and its future() method returns the Future associated with the promise, which is used to get notified of the promise's completion and to retrieve its value. The Future interface represents the result of an action that may, or may not, have occurred yet.
A bit late to the game, and the other answers say as much in different words, but this might help. Let's say you were wrapping some older API (e.g. callback-based) to use Futures; then you might do something like this:
Future<String> getStringFromLegacyCallbackAPI() {
    Promise<String> promise = Promise.promise();
    legacyApi.getString(promise::complete);
    return promise.future();
}
Note that the person who calls this method gets a Future so they can only specify what should happen in the event of successful completion or failure (they cannot trigger completion or failure). So, I think you should not pass the promise up the stack - rather the Future should be handed back and the Promise should be kept under the control of the code which can resolve or reject it.
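On the consumer side, the caller can then only react to the outcome, for example (assuming the Vert.x 4 Future API):

getStringFromLegacyCallbackAPI()
    .onSuccess(value -> System.out.println("Got: " + value))
    .onFailure(err -> err.printStackTrace());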
A Promise, on the other hand, is the writable side of an action that may or may not have occurred yet.
And according to the wiki:
Given the new Promise / Future APIs the start(Future<Void>) and stop(Future<Void>) methods have been deprecated and will be removed in Vert.x 4.
Please migrate to the start(Promise) and stop(Promise) variants.
As a paraphrase,
A future is a read-only container for a result that does not yet exist, while a promise can be written (normally only once).
More from here
I would like to run specific long-running functions (which execute database queries) on a separate thread. However, let's assume that the underlying database engine only allows one connection at a time and the connection struct isn't Sync (I think at least the latter is true for diesel).
My solution would be to have a single separate thread (as opposed to a thread pool) where all the database-work happens and which runs as long as the main thread is alive.
I think I know how I would do this by passing messages over channels, but that requires quite a lot of boilerplate code (e.g. explicitly sending the function arguments over the channel, etc.).
Is there a more direct way of achieving something like this with Rust (and possibly Tokio and the new async/await notation that is in nightly)?
I'm hoping to do something along the lines of:
let handle = spawn_thread_with_runtime(...);
let future = run_on_thread!(handle, query_function, argument1, argument2);
where query_function would be a function that immediately returns a future and does the work on the other thread.
Rust nightly and external crates / macros would be ok.
If external crates are an option, I'd consider taking a look at actix, an Actor Framework for Rust.
This will let you spawn an Actor in a separate thread that effectively owns the connection to the DB. It can then listen for messages, execute work/queries based on those messages, and return either sync results or futures.
It takes care of most of the boilerplate for message passing, spawning, etc. at a higher level.
There's also a Diesel example in the actix documentation, which sounds quite close to the use case you had in mind.
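As a rough sketch of that shape (assuming actix's SyncArbiter API; DbExecutor, RunQuery, and the fake query are made up for illustration):

use actix::prelude::*;

// Hypothetical message type: a query that produces a String.
struct RunQuery(String);

impl Message for RunQuery {
    type Result = String;
}

// The actor owns the (non-Sync) database connection.
struct DbExecutor; // would hold e.g. a diesel connection

impl Actor for DbExecutor {
    // SyncContext runs the actor on its own dedicated thread(s).
    type Context = SyncContext<Self>;
}

impl Handler<RunQuery> for DbExecutor {
    type Result = String;

    fn handle(&mut self, msg: RunQuery, _: &mut SyncContext<Self>) -> String {
        // Run the blocking query here; callers only ever see a future.
        format!("result of {}", msg.0)
    }
}

// Usage: one dedicated thread owning the connection.
// let addr = SyncArbiter::start(1, || DbExecutor);
// let result_future = addr.send(RunQuery("SELECT ...".into()));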
In Go, if we have a type with a method that starts some looped mechanism (polling A and doing B forever), is it best to express this as:
// Run does stuff, you probably want to run this as a goroutine
func (t Type) Run() {
    // Do long-running stuff
}
and document that this probably wants to be launched as a goroutine (and let the caller deal with that)
Or to hide this from the caller:
// Run does stuff concurrently
func (t Type) Run() {
    go DoRunStuff()
}
I'm new to Go and unsure whether convention says to let the caller prefix the call with 'go' or to do it for them when the code is designed to run asynchronously.
My current view is that we should document and give the caller a choice. My thinking is that in Go the concurrency isn't actually part of the exposed interface, but a property of using it. Is this right?
I had your opinion on this until I started writing an adapter for a web service that I wanted to make concurrent. I have a goroutine that must be started to parse results that are returned to the channel from the web calls. There is absolutely no case in which this API would work without using it as a goroutine.
I then began to look at packages like net/http. There is mandatory concurrency within that package. It is documented at the interface level that it should be able to be used concurrently; however, the default implementations automatically use goroutines.
Because Go's standard library commonly fires off goroutines within its own packages, I think that if your package or API warrants it, you can handle them on your own.
My current view is that we should document and give the caller a choice.
I tend to agree with you.
Since Go makes it so easy to run code concurrently, you should try to avoid concurrency in your API (which forces clients to use it concurrently). Instead, create a synchronous API, and then clients have the option to run it synchronously or concurrently.
This was discussed in a talk a couple of years ago: Twelve Go Best Practices
Slide 26, in particular, shows code more like your first example.
I would view the net/http package as an exception because in this case, the concurrency is almost mandatory. If the package didn't use concurrency internally, the client code would almost certainly have to. For example, http.Client doesn't (to my knowledge) start any goroutines. It is only the server that does so.
In most cases, it's going to be one line of code for the caller either way:
go Run() or StartGoroutine()
The synchronous API is no harder to use concurrently and gives the caller more options.
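If you do want to offer the convenience as well, a thin wrapper over the synchronous Run keeps both options open (a sketch; Start and its done channel are illustrative, not a convention):

// Start launches Run on a new goroutine and returns a channel that is
// closed when Run finishes, so callers can still wait if they want to.
func (t Type) Start() <-chan struct{} {
    done := make(chan struct{})
    go func() {
        defer close(done)
        t.Run()
    }()
    return done
}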
There is no 'right' answer because circumstances differ.
Obviously there are cases where an API might contain utilities, simple algorithms, data collections, etc. that would look odd if packaged up as goroutines.
Conversely, there are cases where it is natural to expect 'under-the-hood' concurrency, such as a rich IO library (http server being the obvious example).
For a more extreme case, suppose you were to produce a library of plug-and-play concurrent services. Such an API consists of modules, each having a well-described interface via channels. Clearly, in this case the API would inevitably involve goroutines being started as part of it.
One clue might well be the presence or absence of channels in the function parameters. But I would expect clear documentation of what to expect either way.
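For instance, a (hypothetical) signature that returns a channel advertises that a goroutine is running behind it:

// Events streams results until ctx is cancelled; the returned channel makes
// the internal concurrency part of the contract. (Illustrative only; the
// Event type and t.poll() are assumed.)
func (t Type) Events(ctx context.Context) <-chan Event {
    out := make(chan Event)
    go func() {
        defer close(out)
        for {
            select {
            case <-ctx.Done():
                return
            case out <- t.poll():
            }
        }
    }()
    return out
}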