How are coroutines implemented? - scheme

I have a question about coroutine implementation.
I first saw coroutines in Lua and Stackless Python. I understand the concept and how to use the yield keyword, but I cannot figure out how they are implemented.
Can someone explain that?

Coroutining is initiated by pushing the target address onto the stack; each coroutine switch then exchanges the current PC with the address on top of the stack. That saved address eventually has to be popped to terminate the coroutining.
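For a higher-level picture of the same idea, here is a minimal sketch (assuming Python, since the question mentions yield and Stackless Python; the ping, pong, and run names are made up for the example). Each yield saves a resume point inside a generator, and a tiny driver loop plays the role that the PC/stack exchange above plays at the machine level.

# Minimal sketch: cooperative switching with Python generators (semi-coroutines).
# Each `yield` saves the generator's resume point; next() restores it.

def ping(n):
    for i in range(n):
        print("ping", i)
        yield  # suspend; control returns to the scheduler

def pong(n):
    for i in range(n):
        print("pong", i)
        yield

def run(*coros):
    # A tiny round-robin "scheduler" that keeps resuming each coroutine
    # until all of them are exhausted.
    live = list(coros)
    while live:
        for c in list(live):
            try:
                next(c)
            except StopIteration:
                live.remove(c)

run(ping(3), pong(3))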

See also: Implementing “Generator” support in a custom language. Generators are basically a limited form of (semi-)coroutines; most of what is discussed in that question applies here as well.
Also: How are exceptions implemented under the hood? While exceptions are obviously very different from coroutines, they do have something in common: both are advanced universal control-flow constructs. (In fact, you can implement coroutines using exceptions and exceptions using coroutines.)

Related

What is the difference between channelFlow and callbackFlow

I am trying to understand why we need the callbackFlow builder; it seems almost the same as channelFlow, except that callbackFlow is inline. What is the use case?
They do exactly the same thing; one of them literally calls the other. The difference is in the intent: it is supposed to make your code more self-documenting about your intentions.
Use callbackFlow for callbacks and channelFlow for concurrent flow emission.
EDIT:
As of version 1.3.4, callbackFlow will detect missing calls to awaitClose, making it less error-prone.
So they are now different.

Is there parallelism in Elm?

Is it possible to write parallel code in Elm? Elm is purely functional, so no locking is needed. Of course, I could use the JavaScript FFI, spawn workers there, and do it on my own. But I want a more user-friendly way of doing this.
Short answer
No, not currently. But the next release (0.15) will have new ways to handle effects inside Elm, so you will need to rely on ports + JavaScript code less. There may well be a way to spawn workers inside Elm in the next version.
More background
If you're feeling adventurous, try reading the published paper on Elm (or the longer original thesis), which shows that the original flavour of FRP that Elm uses is well suited to fine-grained concurrency. There is also an async construct which can potentially make part of the program run separately in a more coarse-grained manner. That could be supported with OS-level threads (like JS Web Workers) for parallelism.
There have been earlier experiments with Webworkers. There is certainly an interest in concurrency within the community, but JavaScript doesn't offer (m)any great options for concurrency.
For reading tips on the paper, here's a post of mine from the elm-discuss mailing list:
If you want to know more about signals and opt-in async, I suggest you try Evan's PLDI paper on Elm. Read from the introduction (1) up to Building GUIs (4). You can skip the type system (3.2) and functional evaluation (3.3.1); that may save you some time. Most of what comes in and after Building GUIs (4) is probably stuff you already know. Figure 8 is probably the best overview of what the async keyword does (note that the async keyword is not implemented in the current Elm compiler).

How to interface blocking and non-blocking code with asyncio

I'm trying to use a coroutine function outside of the event loop. (In this case, I want to call a function in Django that could also be used inside the event loop.)
There doesn't seem to be a way to do this without making the calling function a coroutine.
I realize that Django is built to be blocking and is therefore incompatible with asyncio. Still, I think this question might help people who are making the transition or working with legacy code.
For that matter, it might help to understand async programming and why it doesn't work with blocking code.
After a lot of research I think these solutions could be helpful:
Update your legacy code to use asyncio:
Yes, I know it can be hard and painful, but it might be the sanest choice. If you want to use Django like I did... well, you've got a lot of work to do to make Django async'd. I'm not sure it is possible, but I found at least one attempt: https://github.com/aaugustin/django-c10k-demo (though, in a YouTube video, the author explained all of its shortcomings).
Use asyncio.async or asyncio.Task:
These will let you run something async inside blocking code, but the downside is that you will not be able to wait for them to finish without doing something ugly like a while loop that checks whether the future has completed... ugh. But if you don't need the result, that might work for you.
About case #2: blocking code should at least be wrapped with loop.run_in_executor.
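To make both directions concrete, here is a small sketch (assuming a current Python/asyncio, where the old asyncio.async spelling is now asyncio.ensure_future; the fetch_data and blocking_job helpers are made up for the example): run_until_complete lets plain synchronous code wait for a coroutine's result without a polling loop, and run_in_executor is the wrapper mentioned just above for calling blocking code from inside a coroutine.

import asyncio
import time

async def fetch_data():
    # Stand-in for the coroutine you want to reuse outside the event loop.
    await asyncio.sleep(0.1)
    return "async result"

def blocking_job():
    # Stand-in for legacy blocking code (e.g. a synchronous Django call).
    time.sleep(0.1)
    return "blocking result"

def call_from_sync_code():
    # Synchronous code that needs the coroutine's result: drive the loop to
    # completion instead of polling a Task in a while loop.
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(fetch_data())
    finally:
        loop.close()

async def call_blocking_from_async():
    # A coroutine that must call legacy blocking code: run_in_executor moves
    # it to a thread pool so the event loop stays responsive.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, blocking_job)

print(call_from_sync_code())
print(asyncio.run(call_blocking_from_async()))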

How to properly use Golang packages (standard library or third-party) with goroutines?

Hi Golang programmers,
First of all, I apologize if my question isn't very clear, but I'm trying to understand the proper usage pattern for goroutines when writing Golang code that uses the standard library or other libraries.
Let me elaborate: suppose I import some package that I didn't have a hand in writing and that I want to utilize. Let's say this package somehow does a simple HTTP GET request to a website such as Flickr, for example. If I want a concurrent request, I can just prefix the function call with the go keyword. But how do I know that this package doesn't already make some internal go calls itself when doing the request, thereby making my go calls redundant?
Do Golang packages typically say in the documentation that their method is "greened"? Or do they perhaps provide two versions of a method, one that is green and one that is plain synchronous?
In my quest to understand Go idioms and usage patterns, I feel like even when using packages in the standard library I can't be sure whether my go commands are necessary. I suppose I could profile the calls or write test code, but it feels odd to have to do that to figure out whether a func is already "green".
I suppose another possibility is that it's up to me to study the source code of whatever I'm using, understand how it should be used, and decide whether the go keyword is necessary.
If anybody can shed some light on this or point me to the right documentation or even a Golang screencast, I'd much appreciate it. I think Rob Pike briefly mentions in one talk that a good client API written in Go is just written in a typical synchronous manner, and it's up to the caller of that API to choose whether to make it green or not.
Thanks for your time,
-Ralph
If a function or method returns some value(s), or has a side effect of that kind (io.Reader.Read), then it is necessarily a synchronous thing. Unless documented otherwise, no safety for concurrent use by multiple goroutines should be assumed.
If it accepts a closure (callback) or a channel, or if it returns a channel, then it is often an asynchronous thing. If that's the case, it's normally either obvious or explicitly documented. Asynchronous stuff like this is usually safe for concurrent use by multiple goroutines.

How do I do a For loop without freezing the GUI?

I would like to know how I could run the following loop in a way that doesn't freeze the GUI, as the loop can take minutes to complete. Thank you.
For i = 0 To imageCount
'code
Next
The short answer is you run the loop on another thread. The long answer is a whole book and a couple of semesters at university, because it entails resource access conflicts and various ways of addressing them such as locking and queueing.
Since you appear to be using VB.NET, I suggest you use the latest version of the .NET Framework and take advantage of Async and Await, which you can learn about from MSDN.
These keywords implement a very sophisticated canned solution that will allow you to achieve your goals in blissful ignorance of the nightmare behind them :)
Why experienced parallel coders would bother with async/await
Standout features of async/await are
automatic temporary marshalling back to the UI thread as required
scope of exception handlers (try/catch/finally) can span both setup and callback code
you write what is conceptually linear code with blocking calls on the UI thread, but because you mark the calls that block with "await", the compiler rewrites your code as a state machine that makes the preceding points true
Linear code with blocking calls is easy to write and easy to read. So it's much better from a maintenance perspective. But it provides an atrocious UX. Async/await means you can have it both ways.
All this is built on the TPL; in a very real sense it's nothing more than a compiler-supported design pattern for the TPL, which is why methods tagged as async typically return a Task or Task<T>. There's so much to love about this, and no technical downside that I've seen.
My only concern is that it's all too good, so a whole generation will have no idea how tall the giants on whose shoulders they perch, just as most modern programmers have only dim awareness of the mechanics of stack frames in call stacks (the magic behind local variables).
You can run the loop on a separate thread. Read about using BackgroundWorker here: http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.aspx
