Why did my goroutine stop after a few moments of running?

package main

import (
    "fmt"
    "runtime"
)

func main() {
    runtime.GOMAXPROCS(1)
    go spinner()
    const n = 45
    fibN := fib(n)
    fmt.Printf("\rFibonacci(%d) = %d\n", n, fibN)
}

func spinner() {
    for {
        for _ = range `-\|/` {
        }
    }
}

func fib(x int) int {
    fmt.Println(x)
    if x < 2 {
        return x
    }
    return fib(x-1) + fib(x-2)
}
After a few seconds, it stops printing x in fib. I deliberately use one processor to understand why it stops. Is the scheduler stuck, no longer rescheduling the goroutine that runs fib?

Is the scheduler stuck and not rescheduling the goroutine running the fib anymore?
Short answer: yes.
Longer answer: you aren't guaranteed that this will happen, even with just one CPU. The system could do an occasional reschedule on its own. It's just that the implementation on which you are testing this doesn't.
As JimB notes in a comment, the kind of hard spin loop you have in spinner is ... unfriendly, at best. It should contain some way of releasing computing resources, such as spinning with time.After or time.NewTicker, or at the very least calling runtime.Gosched(). I imagine your original spinner was something like this, only without the time.NewTicker:
func spinner() {
    t := time.NewTicker(50 * time.Millisecond)
    for {
        for _, c := range `-\|/` {
            fmt.Print(string(c), "\b")
            <-t.C
        }
    }
}
With the ticker in place, the spinner spins pretty fast, but the program completes (despite the non-caching fib). Of course a smarter fib helps rather more:
var fibcache = map[int]int64{}

func fib(x int) int64 {
    if x < 2 {
        return int64(x)
    }
    r, ok := fibcache[x]
    if !ok {
        r = fib(x-1) + fib(x-2)
        fibcache[x] = r
    }
    return r
}
(I changed the return type to int64 for what I hope are obvious reasons.) Now we can find Fibonacci(90) = 2880067194370816120 in 0.00 seconds (rounded).
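For completeness, the runtime.Gosched() route mentioned above also works: merely yielding the processor in the spin loop lets the fib goroutine run. A minimal sketch of my own (not from the original answer):

package main

import (
    "fmt"
    "runtime"
)

func main() {
    runtime.GOMAXPROCS(1)
    go spinner()
    fmt.Println(fib(30)) // completes even on one processor
}

func spinner() {
    for {
        for range `-\|/` {
            runtime.Gosched() // yield so the scheduler can run the fib goroutine
        }
    }
}

func fib(x int) int {
    if x < 2 {
        return x
    }
    return fib(x-1) + fib(x-2)
}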

Job queue where workers can add jobs, is there an elegant solution to stop the program when all workers are idle?

I find myself in a situation where I have a queue of jobs where workers can add new jobs when they are done processing one.
For illustration, in the code below, a job consists of counting up to JOB_COUNTING_TO and, randomly, 1/5 of the time a worker will add a new job to the queue.
Because my workers can add jobs to the queue, it is my understanding that I cannot use a channel as my job queue. Sending to a channel is blocking, and even with a buffered channel, this code, due to its recursive nature (jobs adding jobs), could easily reach a situation where all the workers are sending to the channel and no worker is available to receive.
This is why I decided to use a shared queue protected by a mutex.
Now, I would like the program to halt when all the workers are idle. Unfortunately this cannot be spotted just by checking for len(jobQueue) == 0, as the queue could be empty while some workers are still doing their jobs and might add new jobs afterward.
The solution I came up with feels a bit clunky: it makes use of the variables var idleWorkerCount int and var isIdle [NB_WORKERS]bool to keep track of idle workers, and the code stops when idleWorkerCount == NB_WORKERS.
My question is whether there is a concurrency pattern that I could use to make this logic more elegant.
Also, for some reason I don't understand, the technique that I currently use (code below) becomes really inefficient when the number of workers gets quite big (such as 300000 workers): for the same number of jobs, the code will be > 10x slower with NB_WORKERS = 300000 than with NB_WORKERS = 3000.
Thank you very much in advance!
package main

import (
    "math/rand"
    "sync"
)

const NB_WORKERS = 3000
const NB_INITIAL_JOBS = 300
const JOB_COUNTING_TO = 10000000

var jobQueue []int
var mu sync.Mutex
var idleWorkerCount int
var isIdle [NB_WORKERS]bool

func doJob(workerId int) {
    mu.Lock()
    if len(jobQueue) == 0 {
        if !isIdle[workerId] {
            idleWorkerCount += 1
        }
        isIdle[workerId] = true
        mu.Unlock()
        return
    }
    if isIdle[workerId] {
        idleWorkerCount -= 1
    }
    isIdle[workerId] = false
    var job int
    job, jobQueue = jobQueue[0], jobQueue[1:]
    mu.Unlock()
    for i := 0; i < job; i += 1 {
    }
    if rand.Intn(5) == 0 {
        mu.Lock()
        jobQueue = append(jobQueue, JOB_COUNTING_TO)
        mu.Unlock()
    }
}

func main() {
    // Filling up the queue with initial jobs
    for i := 0; i < NB_INITIAL_JOBS; i += 1 {
        jobQueue = append(jobQueue, JOB_COUNTING_TO)
    }
    var wg sync.WaitGroup
    for i := 0; i < NB_WORKERS; i += 1 {
        wg.Add(1)
        go func(workerId int) {
            for idleWorkerCount != NB_WORKERS {
                doJob(workerId)
            }
            wg.Done()
        }(i)
    }
    wg.Wait()
}
Because my workers can add jobs to the queue
A re-entrant channel always deadlocks. This is easy to demonstrate using this code:
package main

import (
    "fmt"
)

func main() {
    out := make(chan string)
    c := make(chan string)
    go func() {
        for v := range c {
            c <- v + " 2"
            out <- v
        }
    }()
    go func() {
        c <- "hello world!" // passes OK
        c <- "hello world!" // does not pass; the goroutine is blocked pushing to itself
    }()
    for v := range out {
        fmt.Println(v)
    }
}
While the program tries to push at c <- v + " 2", it cannot read at for v := range c {, push at c <- "hello world!", or read at for v := range out {; thus, it deadlocks.
If you want to get past that situation you must overflow somewhere: onto goroutines, or somewhere else.
package main

import (
    "fmt"
    "time"
)

func main() {
    out := make(chan string)
    c := make(chan string)
    go func() {
        for v := range c {
            go func(v string) { // use goroutines on the stack as a bank for the required overflow.
                <-time.After(time.Second) // simulate slowness.
                c <- v + " 2"
            }(v) // pass v as a parameter to avoid capturing the loop variable
            out <- v
        }
    }()
    go func() {
        for {
            c <- "hello world!"
        }
    }()
    exit := time.After(time.Second * 60)
    for v := range out {
        fmt.Println(v)
        select {
        case <-exit:
            return
        default:
        }
    }
}
But now you have a new problem.
You created a memory bomb by overflowing without limits onto the stack. Technically, whether it blows up depends on the time needed to finish a job, the memory available, the speed of your CPUs, and the shape of the data (a job might or might not generate a new job). So there is an upper limit, but it is so hard to make sense of it that in practice this ends up being a bomb.
Consider not overflowing without limits onto the stack.
If you don't have any arbitrary limit at hand, you can use a semaphore to cap the overflow.
https://play.golang.org/p/5JWPQiqOYKz
My bombs did not explode with work timeouts of 1s and 2s, but they took a large chunk of memory. In another round with modified code, one did explode.
Of course, because you use if rand.Intn(5) == 0 { in your code, the problem is largely mitigated. Still, when you meet such a pattern, think twice about the code.
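The playground link above has the author's version; as a rough illustration of the idea (assumed names and capacity, not the linked code), a buffered channel can act as a counting semaphore that caps the number of in-flight goroutines:

package main

import (
    "fmt"
    "sync"
)

func main() {
    sem := make(chan struct{}, 8) // the buffer capacity caps concurrent workers
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        sem <- struct{}{} // acquire a slot; blocks while 8 workers are in flight
        go func(n int) {
            defer wg.Done()
            defer func() { <-sem }() // release the slot
            fmt.Println("job", n)
        }(i)
    }
    wg.Wait()
}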
Also, for some reason I don't understand, the technique that I currently use (code below) becomes really inefficient when the number of workers gets quite big (such as 300000 workers): for the same number of jobs, the code will be > 10x slower with NB_WORKERS = 300000 than with NB_WORKERS = 3000.
In the big picture, you have a limited number of CPU cycles. All those allocations and instructions to spawn and synchronize have to be executed too. Concurrency is not free.
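One common mitigation, sketched here with assumed names and job counts, is to size the worker pool to the hardware rather than to the number of jobs:

package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    jobs := make(chan int)
    var wg sync.WaitGroup
    workers := runtime.NumCPU() // enough to keep every core busy; more only adds scheduling overhead
    for w := 0; w < workers; w++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := range jobs {
                _ = j * j // stand-in for real work
            }
        }()
    }
    for i := 0; i < 1000; i++ {
        jobs <- i
    }
    close(jobs)
    wg.Wait()
    fmt.Println("done")
}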
Now, I would like the program to halt when all the workers are idle.
I came up with the following, but I find it very difficult to reason about and to convince myself it won't end in a write-on-closed-channel panic.
The idea is to use a sync.WaitGroup to count in flight items and rely on it to properly close the input channel and finish the job.
package main

import (
    "log"
    "math/rand"
    "sync"
    "time"
)

func main() {
    rand.Seed(time.Now().UnixNano())
    var wg sync.WaitGroup
    var wgr sync.WaitGroup
    out := make(chan string)
    c := make(chan string)
    go func() {
        for v := range c {
            if rand.Intn(5) == 0 {
                wgr.Add(1)
                go func(v string) {
                    <-time.After(time.Microsecond)
                    c <- v + " 2"
                }(v)
            }
            wgr.Done()
            out <- v
        }
        close(out)
    }()
    var sent int
    wg.Add(1)
    go func() {
        for i := 0; i < 300; i++ {
            wgr.Add(1)
            c <- "hello world!"
            sent++
        }
        wg.Done()
    }()
    go func() {
        wg.Wait()
        wgr.Wait()
        close(c)
    }()
    var rcv int
    for v := range out {
        // fmt.Println(v)
        _ = v
        rcv++
    }
    log.Println("sent", sent)
    log.Println("rcv", rcv)
}
I ran it with while go run -race .; do :; done and it worked fine for a reasonable number of iterations.

GoLang - Sequential vs Concurrent

I have two versions of factorial: concurrent vs. sequential.
Both programs will calculate the factorial of 10 "1000000" times.
Factorial Concurrent Processing
package main

import (
    "fmt"
    //"math/rand"
    "sync"
    "time"
    //"runtime"
)

func main() {
    start := time.Now()
    printFact(fact(gen(1000000)))
    fmt.Println("Current Time:", time.Now(), "Start Time:", start, "Elapsed Time:", time.Since(start))
    panic("Error Stack!")
}

func gen(n int) <-chan int {
    c := make(chan int)
    go func() {
        for i := 0; i < n; i++ {
            //c <- rand.Intn(10) + 1
            c <- 10
        }
        close(c)
    }()
    return c
}

func fact(in <-chan int) <-chan int {
    out := make(chan int)
    var wg sync.WaitGroup
    for n := range in {
        wg.Add(1)
        go func(n int) {
            //temp := 1
            //for i := n; i > 0; i-- {
            //    temp *= i
            //}
            temp := calcFact(n)
            out <- temp
            wg.Done()
        }(n)
    }
    go func() {
        wg.Wait()
        close(out)
    }()
    return out
}

func printFact(in <-chan int) {
    //for n := range in {
    //    fmt.Println("The random Factorial is:", n)
    //}
    var i int
    for range in {
        i++
    }
    fmt.Println("Count:", i)
}

func calcFact(c int) int {
    if c == 0 {
        return 1
    } else {
        return calcFact(c-1) * c
    }
}

//###End of Factorial Concurrent
Factorial Sequential Processing
package main

import (
    "fmt"
    //"math/rand"
    "time"
    //"runtime" // commented out: unused imports are a compile error in Go
)

func main() {
    start := time.Now()
    //for _, n := range factorial(gen(10000)...) {
    //    fmt.Println("The random Factorial is:", n)
    //}
    var i int
    for range factorial(gen(1000000)...) {
        i++
    }
    fmt.Println("Count:", i)
    fmt.Println("Current Time:", time.Now(), "Start Time:", start, "Elapsed Time:", time.Since(start))
}

func gen(n int) []int {
    var out []int
    for i := 0; i < n; i++ {
        //out = append(out, rand.Intn(10)+1)
        out = append(out, 10)
    }
    println(len(out))
    return out
}

func factorial(val ...int) []int {
    var out []int
    for _, n := range val {
        fa := calcFact(n)
        out = append(out, fa)
    }
    return out
}

func calcFact(c int) int {
    if c == 0 {
        return 1
    } else {
        return calcFact(c-1) * c
    }
}

//###End of Factorial sequential processing
My assumption was that concurrent processing would be faster than sequential, but the sequential version executes faster than the concurrent one on my Windows machine.
I am using an 8-core i7 with 32 GB RAM.
I am not sure if there is something wrong in the programs or whether my basic understanding is incorrect.
p.s. - I am new to GoLang.
The concurrent version of your program will always be slower than the sequential version. The reason, however, is related to the nature and behavior of the problem you are trying to solve.
Your program is concurrent, but it is not parallel. Each calcFact runs in its own goroutine, but there is no division of the amount of work required to be done. Each goroutine must perform the same computation and output the same value.
It is like having a task that requires some text to be copied a hundred times. You have just one CPU (ignore the cores for now).
When you start a sequential process, you point the CPU to the original text once, and ask it to write it down a 100 times. The CPU has to manage a single task.
With goroutines, the CPU is told that there are a hundred tasks that must be done concurrently. It just so happens that they are all the same task. But the CPU is not smart enough to know that.
So it does the same thing as above. Even though each task now is a hundred times smaller, there is still just one CPU. So the amount of work the CPU has to do is still the same, except with all the added overhead of managing a hundred different things at once. Hence, it loses a part of its efficiency.
To see an improvement in performance you'll need proper parallelism. A simple example would be to split the factorial input number roughly in the middle and compute 2 smaller factorials. Then combine them together:
// not an ideal solution
package main

import (
    "fmt"
    "sync"
)

func main() {
    ch := make(chan int)
    r := 10
    result := 1
    go fact(r, ch)
    for i := range ch {
        result *= i
    }
    fmt.Println(result)
}

func fact(n int, ch chan int) {
    p := n / 2
    q := p + 1
    var wg sync.WaitGroup
    wg.Add(2)
    go func() {
        ch <- factPQ(1, p)
        wg.Done()
    }()
    go func() {
        ch <- factPQ(q, n)
        wg.Done()
    }()
    go func() {
        wg.Wait()
        close(ch)
    }()
}

func factPQ(p, q int) int {
    r := 1
    for i := p; i <= q; i++ {
        r *= i
    }
    return r
}
Working code: https://play.golang.org/p/xLHAaoly8H
Now you have two goroutines working towards the same goal and not just repeating the same calculations.
Note about CPU cores:
In your original code, the sequential version's operations are most definitely being distributed among various CPU cores by the runtime environment and the OS. So it still has parallelism to a degree, you just don't control it.
The same is happening in the concurrent version, but, as mentioned above, the overhead of goroutine context switching drags the performance down.
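To make the original workload genuinely parallel, one option is to divide the 1000000 identical computations into one chunk per core instead of spawning one goroutine per computation. A rough sketch (the chunking scheme and names are my own, and total is assumed to divide evenly by the worker count):

package main

import (
    "fmt"
    "runtime"
    "sync"
)

func calcFact(c int) int {
    if c == 0 {
        return 1
    }
    return calcFact(c-1) * c
}

func main() {
    const total = 1000000
    workers := runtime.NumCPU() // one goroutine per core
    chunk := total / workers
    counts := make([]int, workers)
    var wg sync.WaitGroup
    for w := 0; w < workers; w++ {
        wg.Add(1)
        go func(w int) {
            defer wg.Done()
            for i := 0; i < chunk; i++ {
                _ = calcFact(10) // same work as before, but done chunk-wise per core
                counts[w]++      // each goroutine writes its own slot: no data race
            }
        }(w)
    }
    wg.Wait()
    sum := 0
    for _, c := range counts {
        sum += c
    }
    fmt.Println("Count:", sum)
}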
abhink has given a good answer. I would also like to draw attention to Amdahl's Law, which should always be borne in mind when trying to use parallel processing to increase the overall speed of computation. That's not to say "don't make things parallel", but rather: be realistic about expectations and understand the parallel architecture fully.
Go allows us to write concurrent programs. This is related to trying to write faster parallel programs, but the two issues are separate. See Rob Pike's Concurrency is not Parallelism for more info.

sync.Cond Test Broadcast - why check in a loop?

I was trying to use sync.Cond - Wait and Broadcast. I could not understand some parts of it:
The comment for the Wait call says:
// Because c.L is not locked when Wait first resumes, the caller
// typically cannot assume that the condition is true when
// Wait returns. Instead, the caller should Wait in a loop:
//
//    c.L.Lock()
//    for !condition() {
//        c.Wait()
//    }
//    ... make use of condition ...
//    c.L.Unlock()
What is the reason this is required?
So this means the following program may not be correct (although it works):
package main

import (
    "bufio"
    "fmt"
    "os"
    "sync"
)

type processor struct {
    n       int
    c       *sync.Cond
    started chan int
}

func newProcessor(n int) *processor {
    p := new(processor)
    p.n = n
    p.c = sync.NewCond(&sync.Mutex{})
    p.started = make(chan int, n)
    return p
}

func (p *processor) start() {
    for i := 0; i < p.n; i++ {
        go p.process(i)
    }
    for i := 0; i < p.n; i++ {
        <-p.started
    }
    p.c.L.Lock()
    p.c.Broadcast()
    p.c.L.Unlock()
}

func (p *processor) process(f int) {
    fmt.Printf("fork : %d\n", f)
    p.c.L.Lock()
    p.started <- f
    p.c.Wait()
    p.c.L.Unlock()
    fmt.Printf("process: %d - out of wait\n", f)
}

func main() {
    p := newProcessor(5)
    p.start()
    reader := bufio.NewReader(os.Stdin)
    _, _ = reader.ReadString('\n')
}
Condition variables don't stay signaled; they only wake up other goroutines that are currently blocked in .Wait(). So this presents a race condition unless you have a predicate you can check to see whether you even need to wait, or whether the thing you want to wait for has already happened.
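As a minimal, self-contained illustration of the wait-in-a-loop idiom from the docs (a hypothetical queue example, not the asker's code):

package main

import (
    "fmt"
    "sync"
)

func main() {
    var mu sync.Mutex
    cond := sync.NewCond(&mu)
    var queue []int // the predicate: wait until the queue is non-empty

    go func() {
        mu.Lock()
        queue = append(queue, 42)
        mu.Unlock()
        cond.Signal() // wakes the waiter; harmless even if nobody is waiting yet
    }()

    mu.Lock()
    for len(queue) == 0 { // re-check the predicate after every wakeup
        cond.Wait()
    }
    fmt.Println("got", queue[0])
    mu.Unlock()
}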
In your particular case, you have added synchronization between the goroutines calling .Wait() and the one calling .Broadcast() by using your p.started channel, in a manner that, as far as I can tell, should not present the race condition I describe further on in this post. Though I wouldn't bet on it, and personally I would just do it the idiomatic way the documentation describes.
Consider that your start() function is executing the broadcast in these lines:
p.c.L.Lock()
p.c.Broadcast()
At that specific point in time, consider that one of your other goroutines has reached this point in your process() function:
fmt.Printf("fork : %d\n", f)
The next thing that goroutine will do is lock the mutex (which it isn't going to get, at least not until the goroutine in start() releases it) and then wait on the condition variable:
p.c.L.Lock()
p.started <- f
p.c.Wait()
But Wait is never going to return, since at this point there's no one left to signal/broadcast it - the signal has already happened.
So you need another condition that you can test yourself so you don't need to call Wait() when you already know the condition has happened, e.g.
type processor struct {
    n       int
    c       *sync.Cond
    started chan int
    done    bool // added
}

...

func (p *processor) start() {
    for i := 0; i < p.n; i++ {
        go p.process(i)
    }
    for i := 0; i < p.n; i++ {
        <-p.started
    }
    p.c.L.Lock()
    p.done = true // added
    p.c.Broadcast()
    p.c.L.Unlock()
}

func (p *processor) process(f int) {
    fmt.Printf("fork : %d\n", f)
    p.c.L.Lock()
    p.started <- f
    for !p.done { // added
        p.c.Wait()
    }
    p.c.L.Unlock()
    fmt.Printf("process: %d - out of wait\n", f)
}

Parallel processing in golang

Given the following code:
package main

import (
    "fmt"
    "math/rand"
    "time"
)

func main() {
    for i := 0; i < 3; i++ {
        go f(i)
    }
    // prevent main from exiting immediately
    var input string
    fmt.Scanln(&input)
}

func f(n int) {
    for i := 0; i < 10; i++ {
        dowork(n, i)
        amt := time.Duration(rand.Intn(250))
        time.Sleep(time.Millisecond * amt)
    }
}

func dowork(goroutine, loopindex int) {
    // simulate work
    time.Sleep(time.Second * time.Duration(5))
    fmt.Printf("gr[%d]: i=%d\n", goroutine, loopindex)
}
Can I assume that the 'dowork' function will be executed in parallel?
Is this a correct way of achieving parallelism or is it better to use channels and separate 'dowork' workers for each goroutine?
Regarding GOMAXPROCS, you can find this in Go 1.5's release docs:
By default, Go programs run with GOMAXPROCS set to the number of cores available; in prior releases it defaulted to 1.
Regarding preventing the main function from exiting immediately, you could leverage WaitGroup's Wait function.
I wrote this utility function to help parallelize a group of functions:
import "sync"
// Parallelize parallelizes the function calls
func Parallelize(functions ...func()) {
var waitGroup sync.WaitGroup
waitGroup.Add(len(functions))
defer waitGroup.Wait()
for _, function := range functions {
go func(copy func()) {
defer waitGroup.Done()
copy()
}(function)
}
}
So in your case, we could do this:

func1 := func() {
    f(0)
}
func2 := func() {
    f(1)
}
func3 := func() {
    f(2)
}

Parallelize(func1, func2, func3)
If you wanted to use the Parallelize function, you can find it here https://github.com/shomali11/util
This answer is outdated. Please see this answer instead.
Your code will run concurrently, but not in parallel. You can make it run in parallel by setting GOMAXPROCS.
It's not clear exactly what you're trying to accomplish here, but it looks like a perfectly valid way of achieving concurrency to me.
f() will be executed concurrently, but the many dowork() calls will execute sequentially within each f(). Waiting on stdin is also not the right way to ensure your goroutines finish execution. You should instead create a channel on which each f() sends a true when it finishes.
At the end of main() you then wait for n trues on the channel, n being the number of f() goroutines you spun up.
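A minimal sketch of that done-channel pattern (names are illustrative, not from the answer above):

package main

import "fmt"

func f(n int, done chan<- bool) {
    fmt.Println(n)
    done <- true // signal completion
}

func main() {
    const n = 3
    done := make(chan bool)
    for i := 0; i < n; i++ {
        go f(i, done)
    }
    for i := 0; i < n; i++ { // wait for n completions
        <-done
    }
}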
This helped me when I was starting out.
package main

import "fmt"

func put(number chan<- int, count int) {
    i := 0
    for ; i <= (5 * count); i++ {
        number <- i
    }
    number <- -1
}

func subs(number chan<- int) {
    i := 10
    for ; i <= 19; i++ {
        number <- i
    }
}

func main() {
    channel1 := make(chan int)
    channel2 := make(chan int)
    done := 0
    sum := 0
    go subs(channel2)
    go put(channel1, 1)
    go put(channel1, 2)
    go put(channel1, 3)
    go put(channel1, 4)
    go put(channel1, 5)
    for done != 5 {
        select {
        case elem := <-channel1:
            if elem < 0 {
                done++
            } else {
                sum += elem
                fmt.Println(sum)
            }
        case sub := <-channel2:
            sum -= sub
            fmt.Printf("subtracted: %d\n", sub)
            fmt.Println(sum)
        }
    }
    close(channel1)
    close(channel2)
}
"Conventional cluster-based systems (such as supercomputers) employ parallel execution between processors using MPI. MPI is a communication interface between processes that execute in operating system instances on different processors; it doesn't support other process operations such as scheduling. (At the risk of complicating things further, because MPI processes are executed by operating systems, a single processor can run multiple MPI processes and/or a single MPI process can also execute multiple threads!)"
You can add a loop at the end, to block until the jobs are done:
package main

import "time"

func f(n int, b chan bool) {
    println(n)
    time.Sleep(time.Second)
    b <- true
}

func main() {
    b := make(chan bool, 9)
    for n := cap(b); n > 0; n-- {
        go f(n, b)
    }
    for <-b {
        if len(b) == 0 {
            break
        }
    }
}

Python-style generators in Go

I'm currently working through the Tour of Go, and it seemed to me that goroutines were being used similarly to Python generators, particularly in Question 66. The solution to 66 looked complex, so I rewrote it to this:
package main

import "fmt"

func fibonacci(c chan int) {
    x, y := 1, 1
    for {
        c <- x
        x, y = y, x+y
    }
}

func main() {
    c := make(chan int)
    go fibonacci(c)
    for i := 0; i < 10; i++ {
        fmt.Println(<-c)
    }
}
This seems to work. A couple of questions:
If I turn up the buffer size on the channel to, say, 10, fibonacci would fill up 10 further spots as quickly as possible, and main would eat up the spots as quickly as it could go. Is this right? This would be more performant than an unbuffered channel, at the expense of memory, correct?
As the channel doesn't get closed by the fibonacci sender, what happens memory-wise when we go out of scope here? My expectation is that once c and the fibonacci goroutine are out of scope, the channel and everything on it get garbage-collected. My gut tells me this is probably not what happens.
Yes, increasing the buffer size might drastically increase the execution speed of your program, because it will reduce the number of context switches. Goroutines aren't garbage-collected, but channels are. In your example, the fibonacci goroutine will run forever (waiting for another goroutine to read from the channel c), and the channel c will never be destroyed, because the fib-goroutine is still using it.
Here is another, slightly different program, which does not leak memory and is, IMHO, more similar to Python's generators:
package main

import "fmt"

func fib(n int) chan int {
    c := make(chan int)
    go func() {
        x, y := 0, 1
        for i := 0; i <= n; i++ {
            c <- x
            x, y = y, x+y
        }
        close(c)
    }()
    return c
}

func main() {
    for i := range fib(10) {
        fmt.Println(i)
    }
}
Alternatively, if you do not know how many Fibonacci numbers you want to generate, you have to use a separate quit channel so that you can send the generator goroutine a signal when it should stop. This is what's explained in Go's tutorial: https://tour.golang.org/concurrency/4.
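That quit-channel variant looks roughly like the tour's select example (a sketch from memory, not the asker's code):

package main

import "fmt"

func fibonacci(c, quit chan int) {
    x, y := 0, 1
    for {
        select {
        case c <- x:
            x, y = y, x+y
        case <-quit:
            return // the generator goroutine exits and its channel can be collected
        }
    }
}

func main() {
    c := make(chan int)
    quit := make(chan int)
    go fibonacci(c, quit)
    for i := 0; i < 10; i++ {
        fmt.Println(<-c)
    }
    quit <- 0 // tell the generator to stop
}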
I like #tux21b's answer; having the channel created in the fib() function makes the calling code nice and clean. To elaborate a bit, you only need a separate 'quit' channel if there's no way to tell the function when to stop when you call it. If you only ever care about "numbers up to X", you can do this:
package main

import "fmt"

func fib(n int) chan int {
    c := make(chan int)
    go func() {
        x, y := 0, 1
        for x < n {
            c <- x
            x, y = y, x+y
        }
        close(c)
    }()
    return c
}

func main() {
    // Print the Fibonacci numbers less than 500
    for i := range fib(500) {
        fmt.Println(i)
    }
}
If you want the ability to do either, this is a little sloppy, but I personally like it better than testing the condition in the caller and then signalling a quit through a separate channel:
func fib(wanted func(int, int) bool) chan int {
    c := make(chan int)
    go func() {
        x, y := 0, 1
        for i := 0; wanted(i, x); i++ {
            c <- x
            x, y = y, x+y
        }
        close(c)
    }()
    return c
}

func main() {
    // Print the first 10 Fibonacci numbers
    for n := range fib(func(i, x int) bool { return i < 10 }) {
        fmt.Println(n)
    }
    // Print the Fibonacci numbers less than 500
    for n := range fib(func(i, x int) bool { return x < 500 }) {
        fmt.Println(n)
    }
}
I think it just depends on the particulars of a given situation whether you:
- Tell the generator when to stop when you create it, by:
  - passing an explicit number of values to generate,
  - passing a goal value, or
  - passing a function that determines whether to keep going; or
- Give the generator a 'quit' channel, test the values yourself, and tell it to quit when appropriate.
To wrap up and actually answer your questions:
Increasing the channel size would help performance due to fewer context switches. In this trivial example, neither performance nor memory consumption is going to be an issue, but in other situations, buffering the channel is often a very good idea. The memory used by make(chan int, 100) hardly seems significant in most cases, but it could easily make a big performance difference.
You have an infinite loop in your fibonacci function, so the goroutine running it will run (block on c <- x, in this case) forever. The fact that (once c goes out of scope in the caller) you won't ever again read from the channel you share with it doesn't change that. And as #tux21b pointed out, the channel will never be garbage collected since it's still in use. This has nothing to do with closing the channel (the purpose of which is to let the receiving end of the channel know that no more values will be coming) and everything to do with not returning from your function.
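You can actually observe that leak with runtime.NumGoroutine; a small sketch of my own (the helper name is made up):

package main

import (
    "fmt"
    "runtime"
    "time"
)

// leaky starts a goroutine that blocks forever on an unbuffered send.
func leaky() {
    c := make(chan int)
    go func() {
        c <- 1 // blocks forever: nobody ever receives
    }()
}

func main() {
    for i := 0; i < 5; i++ {
        leaky()
    }
    time.Sleep(100 * time.Millisecond)                 // let the goroutines start and block
    fmt.Println("goroutines:", runtime.NumGoroutine()) // main + 5 leaked
}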
You could use closures to simulate a generator. Here is the example from golang.org.
package main

import "fmt"

// fib returns a function that returns
// successive Fibonacci numbers.
func fib() func() int {
    a, b := 0, 1
    return func() int {
        a, b = b, a+b
        return a
    }
}

func main() {
    f := fib()
    // Function calls are evaluated left-to-right.
    fmt.Println(f(), f(), f(), f(), f())
}
Using channels to emulate Python generators kind of works, but it introduces concurrency where none is needed and adds more complication than is probably required. Here, just keeping the state explicitly is easier to understand, shorter, and almost certainly more efficient. It makes all your questions about buffer sizes and garbage collection moot.
package main

import "fmt"

type fibState struct {
    x, y int
}

func (f *fibState) Pop() int {
    result := f.x
    f.x, f.y = f.y, f.x+f.y
    return result
}

func main() {
    fs := &fibState{1, 1}
    for i := 0; i < 10; i++ {
        fmt.Println(fs.Pop())
    }
}
