Closed 7 years ago.
Is my understanding below roughly right?

1. Go can mostly detect deadlocks at compile time.
2. Go channels (chan) minimize race conditions because only a single sender or receiver goroutine can access a specific channel at a time.
I wouldn't say that's accurate. On the first point, there are no compile-time guarantees about deadlock; if you use a mutex poorly, you will deadlock, and no compiler can prevent that. You can easily test for race conditions with the race detector, but that's a different thing.
On the second point, a channel serializes your asynchronous operations, but not in the way you state it. Any number of goroutines can be writing to and reading from the same channel. It's essentially a queue to put data in; no further coordination is guaranteed. You won't panic because multiple goroutines read from or write to it at the same time, but if that's happening, Go isn't doing anything beyond that to make your program work well. You have to coordinate the goroutines yourself, using channels.
No. The first point is completely wrong, and the second is unclearly stated at best.
According to this tutorial, the runtime can catch some deadlocks, though I haven't worked through the whole thing:
http://guzalexander.com/2013/12/06/golang-channels-tutorial.html
Closed 2 years ago.
I need to get the processes consuming the most CPU, and over what period of time. Is this possible using any counter or script?
This at least gets you the info on who's using up the CPU. As to when, well, that's another question entirely.
I think you should configure a data collector set in Performance Monitor (PerfMon). You can collect the counter "\Process(*)\% Processor Time". You can roll over the collector files for later analysis and hence see process performance over time.
When you look at the files later, the graphs should make it easier to find the process that's consuming the most CPU. I can't bang out a full tutorial at the moment, but a simple Google search should turn up plenty of instructional material.
I will say the biggest challenge is configuring the schedule just right to make sure you're capturing all the data you need. If that starts getting confusing, there's a folder buried in Task Scheduler called PLA, which stands for Performance Logs & Alerts. You should find a job there that corresponds to your collector; it may be easier to work on the schedule there.
Closed 2 years ago.
With the arrival of the Combine framework, is there still a need to use operation queues? For example, Apple uses operation queues almost everywhere in the WWDC app. So if we use SwiftUI with Combine (asynchronous programming), will there still be a need for operation queues?
Combine is just another asynchronous pattern, but doesn’t supplant operation queues (or dispatch queues). Just as GCD and operation queues happily coexist in our code bases, the same is true with Combine.
GCD is great for easy-to-write, yet still highly performant, code that dispatches tasks to various queues. So if you have something that might risk blocking the main thread, GCD makes it really easy to dispatch that to a background thread and then dispatch a completion block back to the main thread. It also handles timers on background threads, data synchronization, highly optimized parallelized code, etc.
Operation queues are great for higher-level tasks (especially those that are, themselves, asynchronous). You can take these pieces of work, wrap them up in discrete objects (for nice separation of responsibilities) and the operation queues manage execution, cancelation, and constrained concurrency, quite elegantly.
Combine shines at writing concise, declarative, composable, asynchronous event handling code. It excels at writing code that outlines how, for example, one’s UI should reflect some event (network task, notification, even UI updates).
This is obviously an oversimplification, but those are a few of the strengths of each framework. There is definitely overlap among the three, but each has its place.
Closed 4 years ago.
I was testing things out and noticed that when I made Google API calls, my program would create 2 extra goroutines (went from 1 to 3 goroutines). I feel like this would lead to a problem where too many goroutines are created.
Do API calls create goroutines?
Not inherently, but many API implementations will, of course.
I was testing things ... my program would create 2 extra goroutines.
Why do you think this is "extra"? It's probably exactly the right number of goroutines.
I feel like this would lead to a problem where too many goroutines are created.
Don't. You are wrong to feel this way. There's absolutely nothing wrong with using goroutines; that's why they exist.
Closed 8 years ago.
I've read posts where people say certain compilers will implement recursion as loops, while the hardware implements loops as recursion, and vice versa. If I have a recursive function and an iterative function in my program, can someone please explain how the compiler and the hardware are going to treat each one? Please also address any performance benefits of one over the other when the choice of implementation does not clearly favor one method (the way recursion is favored for mergesort).
OK, here is a brief answer:
1) A compiler can optimize tail-recursive calls. Even then, the result is usually not a loop but stack-frame reuse. However, I have never heard of any compiler that converts a loop into recursion, and I see no point in doing so: it would use additional stack space, would likely run slower, and could change the program's semantics (a stack overflow instead of an infinite loop).
2) I would say it is not correct to speak of hardware implementing loops, because the hardware itself does not implement loops. It has instructions (conditional jumps, arithmetic operations, and so on) which are used to implement loops.
Closed 10 years ago.
The sense I get about this idiom is that it is useful because it ensures that resources are released after the object that uses them goes out of scope.
In other words, it's more about de-acquisition and de-initialisation, so why is this idiom named the way it is?
First, I should note that it's widely considered a poorly named idiom. Many people prefer SBRM, which stands for Scope-Bound Resource Management. Although I (grudgingly) go along with using "RAII" simply because it's widely known and used, I do think SBRM gives a much better description of the real intent.
Second, when RAII was new, it applied as much to acquiring resources as to releasing them. In particular, at the time it was fairly common to see initialization happen in two steps: you'd first define an object, and only afterwards dynamically allocate any resources associated with it. Many style guides advocated this, largely because back then (before C++ had exception handling) there was no good way to deal with failure in a constructor. Therefore, the style guides often said, constructors should do only the bare minimum of work, and specifically avoid anything open to failure, especially allocating resources (and a few still say things like that).
Quite a few of those designs already released their resources in the destructor, though, so the destructor side wouldn't have been as clear a distinction from previous practice.