I'm implementing a basic Raft consensus algorithm, but while electing a leader I sometimes end up with multiple leaders for a single term. Here is my implementation of the Raft election:
// RequestVote RPC handler
func (rf *Raft) RequestVote(args *RequestVoteArgs, reply *RequestVoteReply) {
    rf.mu.Lock()
    defer rf.mu.Unlock()
    reply.Term = args.Term
    if rf.currentTerm >= args.Term {
        reply.VoteGranted = false
        return
    }
    if true {
        rf.convertToFollower(args.Term, args.CandidateId)
        DPrintf("term:%v, %v voted %v", args.Term, rf.me, args.CandidateId)
        reply.VoteGranted = true
        return
    }
    reply.VoteGranted = false
}
// AppendEntry (heartbeat) RPC handler
func (rf *Raft) AppendEntry(args *AppendEntryArgs, reply *AppendEntryReply) {
    rf.mu.Lock()
    defer rf.mu.Unlock()
    reply.Term = args.Term
    if args.Term < rf.currentTerm {
        reply.Success = false
        return
    }
    if true {
        if (args.Term > rf.currentTerm) || (args.Term == rf.currentTerm && rf.state == candidate) {
            rf.convertToFollower(args.Term, args.LeaderId)
        }
        rf.lastAppendEntryTime = time.Now()
        reply.Success = true
        return
    }
    reply.Success = false
}
func (rf *Raft) sendRequestVote(server int, args *RequestVoteArgs, reply *RequestVoteReply) bool {
    ok := rf.peers[server].Call("Raft.RequestVote", args, reply)
    return ok
}

func (rf *Raft) sendAppendEntry(server int, args *AppendEntryArgs, reply *AppendEntryReply) bool {
    ok := rf.peers[server].Call("Raft.AppendEntry", args, reply)
    return ok
}
// Electing a new leader
func (rf *Raft) KickStartElection() {
    rf.mu.Lock()
    rf.convertToCandidate()
    term := rf.currentTerm
    candidateId := rf.me
    // lastLogIndex := rf.commitIndex
    // lastLogTerm := rf.log[rf.commitIndex].Term
    rf.mu.Unlock()
    var mu sync.Mutex
    cond := sync.NewCond(&mu)
    peerDone := 1
    peerLength := len(rf.peers)
    majority := peerLength/2 + 1
    vote := 1
    var votefrom []int
    // DPrintf("%v,%v:length of peers %v and majority needed %v", rf.me, rf.currentTerm, peerLength, majority)
    for peer := range rf.peers {
        if peer == rf.me {
            continue
        }
        go func(peer int) {
            args := RequestVoteArgs{Term: term, CandidateId: candidateId}
            reply := RequestVoteReply{}
            rf.sendRequestVote(peer, &args, &reply)
            mu.Lock()
            peerDone++
            if reply.VoteGranted {
                // DPrintf("term %v:%v me,%v peer", rf.currentTerm, rf.me, peer)
                vote++
                votefrom = append(votefrom, peer)
            }
            cond.Broadcast()
            mu.Unlock()
        }(peer)
    }
    mu.Lock()
    for {
        rf.mu.Lock()
        if rf.state != candidate {
            rf.mu.Unlock()
            break
        } else {
            rf.mu.Unlock()
        }
        if (peerLength - peerDone) < (majority - vote) {
            break
        }
        if vote >= majority {
            // DPrintf("%v leader, term %v", rf.me, rf.currentTerm)
            DPrintf("term:%v,leader:%v,%v", term, rf.me, votefrom)
            rf.convertToLeader()
            break
        }
        cond.Wait()
    }
    mu.Unlock()
}
func (rf *Raft) convertToCandidate() {
    rf.currentTerm++
    rf.state = candidate
    rf.votedFor = rf.me
}

func (rf *Raft) convertToLeader() {
    rf.mu.Lock()
    rf.state = leader
    rf.mu.Unlock()
    rf.sendEntry()
}

func (rf *Raft) convertToFollower(term int, CandidateId int) {
    rf.state = follower
    rf.currentTerm = term
    rf.votedFor = CandidateId
}
Here are the logs showing multiple leaders for a term:
Test (2A): multiple elections ...
2023/02/19 00:45:07 term:22,leader:4,[5 6 2]
2023/02/19 00:45:07 term:23, 5 voted 6
2023/02/19 00:45:07 term:23, 4 voted 6
2023/02/19 00:45:07 term:23, 2 voted 6
2023/02/19 00:45:07 term:23,leader:6,[5 4 2]
2023/02/19 00:45:07 term:24, 6 voted 2
2023/02/19 00:45:07 term:24, 5 voted 2
2023/02/19 00:45:07 term:24, 4 voted 2
2023/02/19 00:45:07 term:24,leader:2,[6 5 4]
2023/02/19 00:45:07 term:25, 6 voted 4
2023/02/19 00:45:07 term:25, 2 voted 4
2023/02/19 00:45:07 term:25, 5 voted 4
2023/02/19 00:45:07 term:25,leader:4,[6 2 5]
--- FAIL: TestManyElections2A (5.52s)
config.go:456: term 25 has 2 (>1) leaders [2,4]
// Raft state at term 25
{me: 0, term:11 votedFor:0 state:Candidate}
{me: 1, term:14 votedFor:1 state:Candidate}
{me: 2, term:25 votedFor:4 state:Leader}
{me: 3, term:13 votedFor:3 state:Candidate}
{me: 4, term:25 votedFor:4 state:Leader}
{me: 5, term:25 votedFor:4 state:Follower}
{me: 6, term:25 votedFor:4 state:Follower}
As per the Raft state, the previous term's leader (node 2) voted for 4 but did not change to follower, even though in convertToFollower Raft first changes the state to follower and then updates its term to the new term.
Here is a link to the complete code: Raft Implementation
"You may Find config.go and test_test.go helpful"
reply.Term = args.Term should be reply.Term = rf.currentTerm, so that a stale leader/candidate can learn about the newer term and update its own.
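To make the answer concrete, here is a minimal sketch of the fix against the types in the question (the -1 sentinel passed to convertToFollower is my assumption for "no vote cast yet", not something from the posted code):

// In both RPC handlers, report the receiver's own term:
reply.Term = rf.currentTerm

// On the candidate side, inspect the reply and step down if the peer is ahead:
go func(peer int) {
    args := RequestVoteArgs{Term: term, CandidateId: candidateId}
    reply := RequestVoteReply{}
    if rf.sendRequestVote(peer, &args, &reply) {
        rf.mu.Lock()
        if reply.Term > rf.currentTerm {
            rf.convertToFollower(reply.Term, -1) // assumption: -1 means no vote in the new term
        }
        rf.mu.Unlock()
    }
    // tally the vote under mu as before
}(peer)

It is also worth re-checking, inside convertToLeader while rf.mu is held, that the node is still a candidate for the same term: in the posted code, a RequestVote for a newer term can slip in between the state check in KickStartElection and the rf.mu.Lock() in convertToLeader, which matches the leader-2-at-term-25 state shown above (my observation, beyond the one-line fix).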
Related
I have the following program. I am new to goroutines. What I want to test is simple: I call a goroutine in a loop 100 times; if there is a single failure, the entire program fails, otherwise it succeeds. fail10Percent delays 1 second and checks a random number; if it is 4, it fails.
package main

import (
    "fmt"
    "math/rand"
    "time"
)

func fail10Percent(ch chan int) {
    time.Sleep(1 * time.Second)
    e := rand.Intn(10)
    fmt.Println("Calculating rand.Intn(10) ", e)
    if e == 4 {
        ch <- 0
        return
    }
    ch <- 1
}

func test() {
    for i := 0; i < 100; i++ {
        err := make(chan int)
        go fail10Percent(err)
        res := <-err
        fmt.Println("=== result: ", res)
        if res != 1 {
            fmt.Println("failed")
            return
        }
    }
    fmt.Println("succeeded")
}

func main() {
    test()
}
I expect go fail10Percent(err) to run concurrently 100 times, with only 1 second of total delay. However, when I run it, I see the following results printed one second after another. Why is that, and how can I adjust my program to do what I want?
Calculating rand.Intn(10) 1
=== result: 1
Calculating rand.Intn(10) 7
=== result: 1
Calculating rand.Intn(10) 7
=== result: 1
Calculating rand.Intn(10) 9
=== result: 1
Calculating rand.Intn(10) 1
=== result: 1
Calculating rand.Intn(10) 8
=== result: 1
Calculating rand.Intn(10) 5
=== result: 1
Calculating rand.Intn(10) 0
=== result: 1
Calculating rand.Intn(10) 6
=== result: 1
Calculating rand.Intn(10) 0
=== result: 1
Calculating rand.Intn(10) 4
=== result: 0
failed
Your version runs one second at a time because res := <-err blocks immediately after each goroutine is launched, so the loop waits for that goroutine to finish before starting the next one. I've commented the code below so that you can see what changed.
package main

import (
    "fmt"
    "math/rand"
    "sync"
)

func fail10Percent(ch chan int, w *sync.WaitGroup) {
    defer w.Done()
    num := rand.Intn(10)
    fmt.Println("calculating rand.Intn(10) ", num)
    if num == 4 {
        ch <- 0 // Fail
        return
    }
    ch <- 1 // Pass
}

func test() {
    var ch = make(chan int, 1)
    // Launch the receiver goroutine to report whether each goroutine succeeded or failed, based on the value sent to ch
    go func() {
        for recv := range ch {
            switch recv {
            // Fail
            case 0:
                fmt.Println("goroutine failed")
            // Pass
            case 1:
                fmt.Println("goroutine succeed")
            }
        }
    }()
    // wg is a WaitGroup
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go fail10Percent(ch, &wg)
    }
    // wg.Wait() waits for all goroutines to complete
    wg.Wait()
    // Close the channel so that the receiver can stop
    close(ch)
}

func main() {
    test()
}
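One caveat with this version (my observation, not part of the original answer): test can return, and main can exit, before the receiver goroutine has drained the last buffered value, so the final "goroutine failed/succeed" line may occasionally not print. A done channel closes that window; a sketch of test with that change, reusing fail10Percent from above:

func test() {
    var ch = make(chan int, 1)
    done := make(chan struct{})
    go func() {
        defer close(done) // signal that every value has been received
        for recv := range ch {
            if recv == 0 {
                fmt.Println("goroutine failed")
            } else {
                fmt.Println("goroutine succeed")
            }
        }
    }()
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go fail10Percent(ch, &wg)
    }
    wg.Wait()
    close(ch) // the receiver's range loop ends once ch is drained
    <-done    // wait for the receiver before returning
}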
Update:
A simpler solution, without using sync.WaitGroup:
package main

import (
    "fmt"
    "math/rand"
)

// Using a send-only channel
func fail10Percent(ch chan<- int) {
    num := rand.Intn(10)
    fmt.Println("calculating rand.Intn(10) ", num)
    if num == 4 {
        ch <- 0 // Fail
        return
    }
    ch <- 1 // Pass
}

func test() {
    var ch = make(chan int, 1)
    for i := 0; i < 100; i++ {
        go fail10Percent(ch)
    }
    for i := 0; i < 100; i++ {
        if recv := <-ch; recv == 0 {
            fmt.Println("goroutine failed")
        } else if recv == 1 {
            fmt.Println("goroutine succeed")
        }
    }
    close(ch)
}

func main() {
    test()
}
I am new to Go, but have worked with concurrency before. I am having an issue sharing a slice between multiple goroutines: they do not all see the same data. I use a mutex to lock the struct when I modify the slice, but it doesn't seem to help. I have attached my code and would like to know what I am doing wrong. Thanks for any help!
type State struct {
    waiting    int32
    processing int32
    completed  int32
}

type Scheduler struct {
    sync.Mutex
    items        chan interface{}
    backPressure []interface{}
    capacity     int
    canceler     context.CancelFunc
    state        State
}

func NewScheduler(capacity int, handler func(interface{}) (interface{}, error)) Scheduler {
    ctx, cancel := context.WithCancel(context.Background())
    state := State{}
    atomic.StoreInt32(&state.waiting, 0)
    atomic.StoreInt32(&state.processing, 0)
    atomic.StoreInt32(&state.completed, 0)
    scheduler := Scheduler{
        items:        make(chan interface{}, capacity),
        backPressure: make([]interface{}, 0),
        capacity:     capacity,
        canceler:     cancel,
        state:        state,
    }
    scheduler.initializeWorkers(ctx, handler)
    return scheduler
}

func (s *Scheduler) initializeWorkers(ctx context.Context, handler func(interface{}) (interface{}, error)) {
    for i := 0; i < 5; i++ {
        go s.newWorker(ctx, handler)
    }
}

func (s *Scheduler) newWorker(ctx context.Context, handler func(interface{}) (interface{}, error)) {
    backoff := 0
    for {
        select {
        case <-ctx.Done():
            return
        case job := <-s.items:
            atomic.AddInt32(&s.state.waiting, -1)
            atomic.AddInt32(&s.state.processing, 1)
            job, _ = handler(job)
            backoff = 0
            atomic.AddInt32(&s.state.processing, -1)
            atomic.AddInt32(&s.state.completed, 1)
        default:
            backoff += 1
            s.CheckBackPressure()
            time.Sleep(time.Duration(backoff*10) * time.Millisecond)
        }
    }
}

func (s *Scheduler) AddItem(item interface{}) {
    atomic.AddInt32(&s.state.waiting, 1)
    if len(s.items) < s.capacity {
        select {
        case s.items <- item:
            return
        }
    }
    s.Lock()
    defer s.Unlock()
    s.backPressure = append(s.backPressure, item)
    fmt.Printf("new backpressure len %v \n", len(s.backPressure))
    return
}

func (s *Scheduler) Process() {
    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        defer wg.Done()
        for {
            if atomic.LoadInt32(&s.state.waiting) == 0 && atomic.LoadInt32(&s.state.processing) == 0 {
                return
            }
            runtime.Gosched()
        }
    }()
    wg.Wait()
}

func (s *Scheduler) CheckBackPressure() {
    s.Lock()
    defer s.Unlock()
    if len(s.backPressure) == 0 || s.capacity <= len(s.items) {
        fmt.Printf("backpressure = %d :: len = %d cap = %d \n", len(s.backPressure), len(s.items), s.capacity)
        return
    }
    fmt.Printf("releasing backpressure \n")
    job, tmp := s.backPressure[0], s.backPressure[1:]
    s.backPressure = tmp
    s.items <- job
    return
}

func (s *Scheduler) Stop() {
    s.canceler()
}
This is the code that I am using to test the functionality:
type Job struct {
    Value int
}

func TestSchedulerExceedingCapacity(t *testing.T) {
    handler := func(ptr interface{}) (interface{}, error) {
        job, ok := (ptr).(*Job)
        if ok != true {
            return nil, errors.New("failed to convert job")
        }
        // simulate work
        time.Sleep(50 * time.Millisecond)
        return job, nil
    }
    scheduler := NewScheduler(5, handler)
    for i := 0; i < 25; i++ {
        scheduler.AddItem(&(Job{Value: i}))
    }
    fmt.Printf("PROCESSING\n")
    scheduler.Process()
    fmt.Printf("FINISHED\n")
}
When I update the slice that holds the back pressure, the output seems to indicate that it was updated correctly, printing new backpressure len 1 through new backpressure len 16.
However, when I check the back pressure from the worker, it indicates that the backPressure slice is empty: backpressure = 0 :: len = 0 cap = 5.
Also, "releasing backpressure" is never printed to stdout.
Here is some additional output...
=== RUN TestSchedulerExceedingCapacity
new backpressure len 1
new backpressure len 2
new backpressure len 3
new backpressure len 4
new backpressure len 5
new backpressure len 6
new backpressure len 7
new backpressure len 8
backpressure = 0 :: len = 0 cap = 5
new backpressure len 9
new backpressure len 10
new backpressure len 11
new backpressure len 12
new backpressure len 13
new backpressure len 14
new backpressure len 15
new backpressure len 16
PROCESSING
backpressure = 0 :: len = 0 cap = 5
backpressure = 0 :: len = 0 cap = 5
backpressure = 0 :: len = 0 cap = 5
...
If I don't kill the test, it prints backpressure = 0 :: len = 0 cap = 5 indefinitely.
I am assuming that I am not correctly synchronizing the changes. I would REALLY appreciate any insights, thanks!
Okay, I was able to figure this out, of course, right after I posted the question...
I saw a suggestion somewhere to run the test with the -race option, which enables the data race detector. I immediately got errors, which made the problem much easier to debug.
It turns out the problem was caused by returning the value of NewScheduler rather than a pointer to it: the workers started inside NewScheduler kept operating on the original struct, while the caller got an independent copy. I changed the function to the following code, which fixed the issue.
func NewScheduler(capacity int, handler func(interface{}) (interface{}, error)) *Scheduler {
    ctx, cancel := context.WithCancel(context.Background())
    state := State{}
    atomic.StoreInt32(&state.waiting, 0)
    atomic.StoreInt32(&state.processing, 0)
    atomic.StoreInt32(&state.completed, 0)
    atomic.StoreInt32(&state.errors, 0)
    scheduler := Scheduler{
        items:        make(chan interface{}, capacity),
        backPressure: make([]interface{}, 0),
        capacity:     capacity,
        canceler:     cancel,
        state:        state,
    }
    scheduler.initializeWorkers(ctx, handler)
    return &scheduler
}
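The mechanism behind the fix, in miniature: returning (or assigning) a struct that contains a sync.Mutex and a slice copies both the lock and the slice header, so the two sides stop synchronizing and stop seeing each other's appends. A self-contained sketch of the same effect, with hypothetical names:

package main

import (
    "fmt"
    "sync"
)

type box struct {
    mu    sync.Mutex // locking the copy's mutex does not lock the original's
    items []int
}

func main() {
    a := box{}
    b := a // struct copy: b gets its own mutex and its own slice header (go vet flags this)

    a.items = append(a.items, 1)

    fmt.Println(len(a.items), len(b.items)) // prints "1 0": the copy never sees the append
}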
I ran into a weird problem when using data received through a channel from another goroutine to start multiple goroutines that reverse a linked list simultaneously; it has troubled me for many days. I just want to split the list into a couple of sublists without breaking links, and then start a goroutine to reverse each one, but I always get the runtime error shown in the output below. I really don't know how to fix it; I've tried many changes but still get the same error. Can someone point out the problem or give me advice? Any help is welcome and appreciated. It would be nice if you could provide improved code. Thanks in advance!
UPDATE: the problem was caused by memory corruption due to a data race; it has been solved by using a read-write lock!
Here is my code:
package main

import "sync"

type node struct {
    data int
    next *node
}

type LinkedList struct {
    head *node
    size int
}

type splitResult struct {
    beforeHead, head, tail *node
}

func splitList(head *node, sizoflst, sizofsublst int) <-chan *splitResult {
    nGoroutines := sizoflst / sizofsublst
    if sizoflst < sizofsublst {
        nGoroutines++
    } else {
        if (sizoflst % sizofsublst) >= 6 {
            nGoroutines++
        }
    }
    ch := make(chan *splitResult, nGoroutines)
    go func() {
        defer close(ch)
        var beforeHead *node
        tail := head
        ct := 0
        for i := 0; i < nGoroutines; i++ {
            for ct < sizofsublst-1 && tail.next != nil {
                tail = tail.next
                ct++
            }
            if i == nGoroutines-1 {
                testTail := tail
                for testTail.next != nil {
                    testTail = testTail.next
                }
                ch <- &splitResult{beforeHead, head, testTail}
                break
            }
            ch <- &splitResult{beforeHead, head, tail}
            beforeHead = tail
            head = tail.next
            tail = head
            ct = 0
        }
    }()
    return ch
}

func reverse(split *splitResult, ln **node, wg *sync.WaitGroup) {
    defer wg.Done()
    move := split.head
    prev := split.beforeHead
    if split.tail.next == nil {
        *ln = split.tail
    }
    for move != split.tail.next {
        temp := move.next
        move.next = prev
        prev = move
        move = temp
    }
}

func (ll *LinkedList) Reverse(sizofsublst int) {
    var lastNode *node
    var wg sync.WaitGroup
    if ll.head == nil || ll.head.next == nil {
        return
    }
    splitCh := splitList(ll.head, ll.size, sizofsublst)
    for split := range splitCh {
        wg.Add(1)
        go reverse(split, &lastNode, &wg)
    }
    wg.Wait()
    ll.head = lastNode
}

func (ll *LinkedList) Insert(data int) {
    newNode := new(node)
    newNode.data = data
    newNode.next = ll.head
    ll.head = newNode
    ll.size++
}

func main() {
    ll := &LinkedList{}
    sli := []int{19, 30, 7, 23, 24, 0, 12, 28, 3, 11, 18, 1, 31, 14, 21, 2, 9, 16, 4, 26, 10, 25}
    for _, v := range sli {
        ll.Insert(v)
    }
    ll.Reverse(8)
}
output:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x458db5]
goroutine 21 [running]:
main.reverse(0xc4200820a0, 0xc420096000, 0xc420098000)
/home/user/go/src/local/stackoverflow/tmp.go:69 +0x75
created by main.(*LinkedList).Reverse
/home/user/go/src/local/stackoverflow/tmp.go:85 +0x104
I think your problem is just writing through a nil pointer: make sure you never assign to a field of a node that can be nil.
Code Here
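Beyond that, one concrete hazard in the posted reverse (my reading of the code, offered as a sketch rather than a definitive fix): the loop condition re-reads split.tail.next on every iteration, but the final iteration rewrites tail.next, so the walker can run straight past its own sublist into nodes another goroutine is concurrently reversing, and eventually follows a nil pointer. Capturing the boundary once before the loop avoids that:

func reverse(split *splitResult, ln **node, wg *sync.WaitGroup) {
    defer wg.Done()
    stop := split.tail.next // capture the boundary before any links are rewritten
    move := split.head
    prev := split.beforeHead
    if stop == nil {
        *ln = split.tail
    }
    for move != stop {
        temp := move.next
        move.next = prev
        prev = move
        move = temp
    }
}

The splitter goroutine also reads tail.next after sending each split, so collecting all splits from splitCh first, and only then launching the reverse goroutines, removes the remaining overlap between splitting and reversing.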
The attached gist is a simple program using channels in a producer / multi-consumer model. For some reason,
go run channels.go prints all the results but does not return (and does not deadlock, or at least Go doesn't give me the panic it raises when a deadlock occurs).
type walkietalkie struct {
    in   chan int
    out  chan int
    quit chan bool
}

var items []int = []int{
    0, 1, 2, 3, 4, 5,
}

func work1(q walkietalkie) {
    for {
        select {
        case a, more := <-q.in:
            if more {
                q.out <- a * 2
            }
        default:
            break
        }
    }
}

func work2(q walkietalkie) {
    for {
        select {
        case a, more := <-q.in:
            if more {
                q.out <- a * -1
            }
        default:
            break
        }
    }
}

func work3(q walkietalkie) {
    for {
        select {
        case a, more := <-q.in:
            if more {
                q.out <- a * 7
            }
        default:
            break
        }
    }
}

func main() {
    results := make(chan int, 18)
    defer close(results)
    w := []walkietalkie{
        walkietalkie{in: make(chan int, 6), out: results, quit: make(chan bool, 1)},
        walkietalkie{in: make(chan int, 6), out: results, quit: make(chan bool, 1)},
        walkietalkie{in: make(chan int, 6), out: results, quit: make(chan bool, 1)},
    }
    go work1(w[0])
    go work2(w[1])
    go work3(w[2])
    // Iterate over work items
    l := len(items)
    for i, e := range items {
        // Send the work item to each worker
        for _, f := range w {
            f.in <- e     // send the work item
            if i == l-1 { // This is the last input, close the channel
                close(f.in)
            }
        }
    }
    // Read all the results from the workers
    for {
        select {
        case r, more := <-results:
            if more {
                fmt.Println(r)
            } else {
                continue
            }
        default:
            break
        }
    }
}
You have a few problems.
For one, reading from a channel with the two-value receive, like
case a, more := <-q.in:
will proceed on a closed channel, with more set to false; in your case the default branch is never hit. But those loops are in goroutines and wouldn't stop the program from exiting. The problem is that your main goroutine is doing the same thing. Also, as it turns out, break breaks out of a select as well as a for loop, so if you want to break the for loop you need to use a label and break LABEL.
As an alternative, you could also just return instead of breaking in your main goroutine.
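A sketch of the labeled break in your main goroutine (this assumes results gets closed once all work is done; in the posted code the deferred close never runs before this loop):

ReadLoop:
    for {
        select {
        case r, more := <-results:
            if !more {
                break ReadLoop // channel closed and drained: leaves the for, not just the select
            }
            fmt.Println(r)
        }
    }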
Go newbie here. I am trying out Go's Tour of Go and came across an exercise about channels (https://tour.golang.org/concurrency/7).
The idea is to walk two trees and then evaluate whether the trees are equivalent.
I wanted to solve this exercise using a select that waits for results from both channels; when both finish, I evaluate the resulting slices. Unfortunately the method goes into an infinite loop. I added some output to see what was happening and noticed that only one of the channels was ever reported closed, and then reported open again.
I am clearly doing something wrong, but I can't see what.
My question is: what am I doing wrong? What assumption am I making about the closing of channels that makes the code below go into an infinite loop?
package main

import (
    "fmt"

    "golang.org/x/tour/tree"
)

// Walk walks the tree t sending all values
// from the tree to the channel ch.
func Walk(t *tree.Tree, ch chan int) {
    _walk(t, ch)
    close(ch)
}

func _walk(t *tree.Tree, ch chan int) {
    if t.Left != nil {
        _walk(t.Left, ch)
    }
    ch <- t.Value
    if t.Right != nil {
        _walk(t.Right, ch)
    }
}

// Same determines whether the trees
// t1 and t2 contain the same values.
func Same(t1, t2 *tree.Tree) bool {
    ch1 := make(chan int)
    ch2 := make(chan int)
    go Walk(t1, ch1)
    go Walk(t2, ch2)
    var out1 []int
    var out2 []int
    var tree1open, tree2open bool
    var tree1val, tree2val int
    for {
        select {
        case tree1val, tree1open = <-ch1:
            out1 = append(out1, tree1val)
        case tree2val, tree2open = <-ch2:
            out2 = append(out2, tree2val)
        default:
            if !tree1open && !tree2open {
                break
            } else {
                fmt.Println("Channel open?", tree1open, tree2open)
            }
        }
    }
    if len(out1) != len(out2) {
        return false
    }
    for i := 0; i < len(out1); i++ {
        if out1[i] != out2[i] {
            return false
        }
    }
    return true
}

func main() {
    ch := make(chan int)
    go Walk(tree.New(1), ch)
    for i := range ch {
        fmt.Println(i)
    }
    fmt.Println(Same(tree.New(1), tree.New(1)))
    fmt.Println(Same(tree.New(1), tree.New(2)))
}
A "break" statement terminates execution of the innermost "for", "switch" or "select" statement.
see: http://golang.org/ref/spec#Break_statements
the break statement in your example terminates the select statement, the "innermost" statement.
so add label: ForLoop before for loop and add break ForLoop
ForLoop:
    for {
        select {
        case tree1val, tree1open = <-ch1:
            if tree1open {
                out1 = append(out1, tree1val)
            } else if !tree2open {
                break ForLoop
            }
        case tree2val, tree2open = <-ch2:
            if tree2open {
                out2 = append(out2, tree2val)
            } else if !tree1open {
                break ForLoop
            }
        }
    }
Don't read the rest if you want to solve the problem yourself; come back when you are done.
solution 1 (similar to yours):
package main

import (
    "fmt"

    "golang.org/x/tour/tree"
)

// Walk walks the tree t sending all values
// from the tree to the channel ch.
func Walk(t *tree.Tree, ch chan int) {
    _walk(t, ch)
    close(ch)
}

func _walk(t *tree.Tree, ch chan int) {
    if t.Left != nil {
        _walk(t.Left, ch)
    }
    ch <- t.Value
    if t.Right != nil {
        _walk(t.Right, ch)
    }
}

// Same determines whether the trees
// t1 and t2 contain the same values.
func Same(t1, t2 *tree.Tree) bool {
    ch1, ch2 := make(chan int), make(chan int)
    go Walk(t1, ch1)
    go Walk(t2, ch2)
    tree1open, tree2open := false, false
    tree1val, tree2val := 0, 0
    out1, out2 := make([]int, 0, 10), make([]int, 0, 10)
ForLoop:
    for {
        select {
        case tree1val, tree1open = <-ch1:
            if tree1open {
                out1 = append(out1, tree1val)
            } else if !tree2open {
                break ForLoop
            }
        case tree2val, tree2open = <-ch2:
            if tree2open {
                out2 = append(out2, tree2val)
            } else if !tree1open {
                break ForLoop
            }
        }
    }
    if len(out1) != len(out2) {
        return false
    }
    for i, v := range out1 {
        if v != out2[i] {
            return false
        }
    }
    return true
}

func main() {
    ch := make(chan int)
    go Walk(tree.New(1), ch)
    for i := range ch {
        fmt.Println(i)
    }
    fmt.Println(Same(tree.New(1), tree.New(1)))
    fmt.Println(Same(tree.New(1), tree.New(2)))
}
output:
1
2
3
4
5
6
7
8
9
10
true
false
another way:
package main

import (
    "fmt"

    "golang.org/x/tour/tree"
)

// Walk walks the tree t sending all values
// from the tree to the channel ch.
func Walk(t *tree.Tree, ch chan int) {
    _walk(t, ch)
    close(ch)
}

func _walk(t *tree.Tree, ch chan int) {
    if t != nil {
        _walk(t.Left, ch)
        ch <- t.Value
        _walk(t.Right, ch)
    }
}

// Same determines whether the trees
// t1 and t2 contain the same values.
func Same(t1, t2 *tree.Tree) bool {
    ch1, ch2 := make(chan int), make(chan int)
    go Walk(t1, ch1)
    go Walk(t2, ch2)
    for v := range ch1 {
        if v != <-ch2 {
            return false
        }
    }
    return true
}

func main() {
    ch := make(chan int)
    go Walk(tree.New(1), ch)
    for v := range ch {
        fmt.Println(v)
    }
    fmt.Println(Same(tree.New(1), tree.New(1)))
    fmt.Println(Same(tree.New(1), tree.New(2)))
}
output:
1
2
3
4
5
6
7
8
9
10
true
false
and see:
Go Tour Exercise: Equivalent Binary Trees
The suggestion by Amd in the previous answer is a valid one; however, it still does not solve the problem you're trying to solve. (If you run the program, it will output true for both cases.)
Here's the problem:
ForLoop:
    for {
        select {
        case tree1val, tree1open = <-ch1:
            out1 = append(out1, tree1val)
        case tree2val, tree2open = <-ch2:
            out2 = append(out2, tree2val)
        default:
            // runtime.Gosched()
            if !tree1open && !tree2open {
                break ForLoop
            } else {
                fmt.Println("Channel open?", tree1open, tree2open)
            }
        }
    }
In this case, since the zero values of tree1open and tree2open are false (per the Go specification), execution reaches the default case (a select with a default is non-blocking) and simply breaks out of ForLoop, typically before the walker goroutines have filled the out1 and out2 slices. Hence the lengths of out1 and out2 remain zero, which makes it output true in most cases.
Here is the correction:
ForLoop:
    for {
        select {
        case tree1val, tree1open = <-ch1:
            if tree1open {
                out1 = append(out1, tree1val)
            }
            if !tree1open && !tree2open {
                break ForLoop
            }
        case tree2val, tree2open = <-ch2:
            if tree2open {
                out2 = append(out2, tree2val)
            }
            if !tree1open && !tree2open {
                break ForLoop
            }
        default:
        }
    }
The key thing to note is that we have to check in both cases whether the channels have been closed (equivalent to checking that tree1open and tree2open are both false). With that, it correctly fills up the out1 and out2 slices and then compares their respective values.
The check that tree1open (or tree2open) is true has been added before the append simply to avoid appending zero values to out1 (or out2).