Goroutine started but not executed or partially executed even if using WaitGroup to synchronize - go

I ran into a weird problem when using data received through a channel from another goroutine to start multiple goroutines that reverse a linked list simultaneously; it has troubled me for many days. I want to split the list into a couple of sublists without breaking the links, and then start a goroutine to reverse each one, but I always get the runtime error shown in the output below. I have tried many changes and still get the same error. Can someone point out the problem or give me advice? Any help is welcome and appreciated, and it would be nice if you could provide improved code. Thanks in advance!
UPDATE: the problem was caused by memory corruption due to a data race; it has been solved by using a read-write lock!
Here is my code:
package main

import "sync"

type node struct {
    data int
    next *node
}

type LinkedList struct {
    head *node
    size int
}

type splitResult struct {
    beforeHead, head, tail *node
}

func splitList(head *node, sizoflst, sizofsublst int) <-chan *splitResult {
    nGoroutines := sizoflst / sizofsublst
    if sizoflst < sizofsublst {
        nGoroutines++
    } else {
        if (sizoflst % sizofsublst) >= 6 {
            nGoroutines++
        }
    }
    ch := make(chan *splitResult, nGoroutines)
    go func() {
        defer close(ch)
        var beforeHead *node
        tail := head
        ct := 0
        for i := 0; i < nGoroutines; i++ {
            for ct < sizofsublst-1 && tail.next != nil {
                tail = tail.next
                ct++
            }
            if i == nGoroutines-1 {
                testTail := tail
                for testTail.next != nil {
                    testTail = testTail.next
                }
                ch <- &splitResult{beforeHead, head, testTail}
                break
            }
            ch <- &splitResult{beforeHead, head, tail}
            beforeHead = tail
            head = tail.next
            tail = head
            ct = 0
        }
    }()
    return ch
}

func reverse(split *splitResult, ln **node, wg *sync.WaitGroup) {
    defer wg.Done()
    move := split.head
    prev := split.beforeHead
    if split.tail.next == nil {
        *ln = split.tail
    }
    for move != split.tail.next {
        temp := move.next
        move.next = prev
        prev = move
        move = temp
    }
}

func (ll *LinkedList) Reverse(sizofsublst int) {
    var lastNode *node
    var wg sync.WaitGroup
    if ll.head == nil || ll.head.next == nil {
        return
    }
    splitCh := splitList(ll.head, ll.size, sizofsublst)
    for split := range splitCh {
        wg.Add(1)
        go reverse(split, &lastNode, &wg)
    }
    wg.Wait()
    ll.head = lastNode
}

func (ll *LinkedList) Insert(data int) {
    newNode := new(node)
    newNode.data = data
    newNode.next = ll.head
    ll.head = newNode
    ll.size++
}

func main() {
    ll := &LinkedList{}
    sli := []int{19, 30, 7, 23, 24, 0, 12, 28, 3, 11, 18, 1, 31, 14, 21, 2, 9, 16, 4, 26, 10, 25}
    for _, v := range sli {
        ll.Insert(v)
    }
    ll.Reverse(8)
}
output:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x458db5]
goroutine 21 [running]:
main.reverse(0xc4200820a0, 0xc420096000, 0xc420098000)
/home/user/go/src/local/stackoverflow/tmp.go:69 +0x75
created by main.(*LinkedList).Reverse
/home/user/go/src/local/stackoverflow/tmp.go:85 +0x104

I think your problem is just an assignment through a nil pointer. Don't dereference a pointer that may be nil without checking it first.
Code Here
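Since the update says a read-write lock fixed it: the crash comes from unsynchronized access to the shared `lastNode` pointer that every `reverse` goroutine can write through `ln`, while other goroutines concurrently rewrite `next` links. A minimal sketch of serializing that shared write with a mutex — the `buildList` and `findLast` helpers are illustrations, not the question's code:

```go
package main

import (
    "fmt"
    "sync"
)

type node struct {
    data int
    next *node
}

// buildList links vals into a singly linked list and returns all nodes.
func buildList(vals []int) []*node {
    nodes := make([]*node, len(vals))
    for i, v := range vals {
        nodes[i] = &node{data: v}
    }
    for i := 0; i < len(nodes)-1; i++ {
        nodes[i].next = nodes[i+1]
    }
    return nodes
}

// findLast lets one goroutine per node try to claim the tail; the mutex
// serializes the writes to last, which is the kind of shared-pointer
// race the question ran into.
func findLast(nodes []*node) *node {
    var (
        mu   sync.Mutex
        last *node
        wg   sync.WaitGroup
    )
    for _, n := range nodes {
        n := n
        wg.Add(1)
        go func() {
            defer wg.Done()
            if n.next == nil {
                mu.Lock()
                last = n
                mu.Unlock()
            }
        }()
    }
    wg.Wait()
    return last
}

func main() {
    nodes := buildList([]int{1, 2, 3})
    fmt.Println(findLast(nodes).data)
}
```

Running this under `go run -race` reports no race; removing the mutex while keeping multiple writers is exactly what the race detector flags.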

Related

Concurrently Add Nodes to Linked List golang

I'm trying to add nodes to a linked list concurrently using channels and goroutines. I seem to be doing something wrong, however. Here's what I've written so far.
Currently, my print function just repeats the 8th node. This approach seems to work on other linked lists, so I don't totally understand the issue. Any help would be great. Here is the code that I wrote:
func makeNodes(ctx context.Context, wg *sync.WaitGroup, ch chan Node) {
    defer wg.Done()
    for i := 0; i < 9; i++ {
        tmp := Node{Data: i, Next: nil}
        ch <- tmp
    }
    <-ctx.Done()
    return
}
type Node struct {
    Data int
    Next *Node
}

type List struct {
    Head   *Node
    Length int
    Last   *Node
}

func (l *List) addToEnd(n *Node) {
    if l.Head == nil {
        l.Head = n
        l.Last = n
        l.Length++
        return
    }
    tmp := l.Last
    tmp.Next = n
    l.Last = n
    l.Length++
}

func (l List) print() {
    tmp := l.Head
    for tmp != nil {
        fmt.Println(tmp)
        tmp = tmp.Next
    }
    fmt.Println("\n")
}

func main() {
    cha := make(chan Node)
    defer close(cha)
    ctx := context.Background()
    ctx, cancel := context.WithCancel(ctx)
    var wg sync.WaitGroup
    wg.Add(1)
    list := List{nil, 0, nil}
    go makeNodes(ctx, &wg, cha)
    go func() {
        for j := range cha {
            list.addToEnd(&j)
        }
    }()
    cancel()
    wg.Wait()
    list.print()
}
This program allocates a single structure (j in the for j:= range loop) and repeatedly overwrites it with the contents read from the channel.
This results in the same variable (j, at a fixed memory location) being added to the list multiple times.
Consider modifying the channel to be a channel of pointers.
In main()
cha := make(chan *Node)
Then for makeNodes()
Each time a new node is created (via Node{}), a new Node pointer is placed into the channel.
func makeNodes(ctx context.Context, wg *sync.WaitGroup, ch chan *Node) {
    defer wg.Done()
    for i := 0; i < 9; i++ {
        tmp := Node{Data: i, Next: nil}
        ch <- &tmp
    }
    <-ctx.Done()
    return
}
The following will now correctly add each unique Node pointer to the list.
go func() {
    for j := range cha {
        list.addToEnd(j)
    }
}()
Also, you may find that not all entries make it to the list or are read from the channel. Your method for synchronizing the producer (makeNodes()) and the consumer (the for j := range loop) needs work.
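One way to fix that last point, sketched with hypothetical `produce`/`consume` helpers rather than the question's exact code: the producer closes the channel when it is finished, which ends the consumer's range loop, and the consumer signals completion through a WaitGroup, so no context or manual cancel is needed:

```go
package main

import (
    "fmt"
    "sync"
)

type Node struct {
    Data int
    Next *Node
}

// produce sends n freshly allocated nodes, then closes the channel so
// the consumer's range loop terminates on its own.
func produce(n int, ch chan<- *Node) {
    for i := 0; i < n; i++ {
        ch <- &Node{Data: i}
    }
    close(ch)
}

// consume collects every received node until the channel is drained
// and closed; each pointer is distinct because produce allocates a new
// Node per iteration.
func consume(ch <-chan *Node) []*Node {
    var collected []*Node
    for n := range ch {
        collected = append(collected, n)
    }
    return collected
}

func main() {
    ch := make(chan *Node)
    var wg sync.WaitGroup
    var got []*Node
    wg.Add(1)
    go func() {
        defer wg.Done()
        got = consume(ch)
    }()
    produce(9, ch)
    wg.Wait() // happens-after the consumer finishes, so got is safe to read
    fmt.Println(len(got))
}
```

Closing the channel on the producer side replaces both the context and the deferred close in main, which in the original code could close the channel while the producer was still sending.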

How to collect values from a channel into a slice in Go?

Suppose I have a helper function helper(n int) which returns a slice of integers of variable length. I would like to run helper(n) in parallel for various values of n and collect the output in one big slice. My first attempt at this is the following:
package main

import (
    "fmt"

    "golang.org/x/sync/errgroup"
)

func main() {
    out := make([]int, 0)
    ch := make(chan int)
    go func() {
        for i := range ch {
            out = append(out, i)
        }
    }()
    g := new(errgroup.Group)
    for n := 2; n <= 3; n++ {
        n := n
        g.Go(func() error {
            for _, i := range helper(n) {
                ch <- i
            }
            return nil
        })
    }
    if err := g.Wait(); err != nil {
        panic(err)
    }
    close(ch)
    // time.Sleep(time.Second)
    fmt.Println(out) // should have the same elements as [0 1 0 1 2]
}

func helper(n int) []int {
    out := make([]int, 0)
    for i := 0; i < n; i++ {
        out = append(out, i)
    }
    return out
}
However, if I run this example I do not get all 5 expected values, instead I get
[0 1 0 1]
(If I uncomment the time.Sleep I do get all five values, [0 1 2 0 1], but this is not an acceptable solution).
It seems that the problem with this is that out is being updated in a goroutine, but the main function returns before it is done updating.
One thing that would work is using a buffered channel of size 5:
func main() {
    ch := make(chan int, 5)
    g := new(errgroup.Group)
    for n := 2; n <= 3; n++ {
        n := n
        g.Go(func() error {
            for _, i := range helper(n) {
                ch <- i
            }
            return nil
        })
    }
    if err := g.Wait(); err != nil {
        panic(err)
    }
    close(ch)
    out := make([]int, 0)
    for i := range ch {
        out = append(out, i)
    }
    fmt.Println(out) // should have the same elements as [0 1 0 1 2]
}
However, although in this simplified example I know what the size of the output should be, in my actual application this is not known a priori. Essentially what I would like is an 'infinite' buffer such that sending to the channel never blocks, or a more idiomatic way to achieve the same thing; I've read https://blog.golang.org/pipelines but wasn't able to find a close match to my use case. Any ideas?
In this version of the code, execution blocks until ch is closed.
ch is always closed at the end of the goroutine responsible for pushing into it. Because the program pushes to ch from a goroutine, a buffered channel is not needed.
package main

import (
    "fmt"

    "golang.org/x/sync/errgroup"
)

func main() {
    ch := make(chan int)
    go func() {
        g := new(errgroup.Group)
        for n := 2; n <= 3; n++ {
            n := n
            g.Go(func() error {
                for _, i := range helper(n) {
                    ch <- i
                }
                return nil
            })
        }
        if err := g.Wait(); err != nil {
            panic(err)
        }
        close(ch)
    }()
    out := make([]int, 0)
    for i := range ch {
        out = append(out, i)
    }
    fmt.Println(out) // should have the same elements as [0 1 0 1 2]
}

func helper(n int) []int {
    out := make([]int, 0)
    for i := 0; i < n; i++ {
        out = append(out, i)
    }
    return out
}
Here is a fixed version of the first snippet; it is more convoluted, but it demonstrates the usage of sync.WaitGroup.
package main

import (
    "fmt"
    "sync"

    "golang.org/x/sync/errgroup"
)

func main() {
    out := make([]int, 0)
    ch := make(chan int)
    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        defer wg.Done()
        for i := range ch {
            out = append(out, i)
        }
    }()
    g := new(errgroup.Group)
    for n := 2; n <= 3; n++ {
        n := n
        g.Go(func() error {
            for _, i := range helper(n) {
                ch <- i
            }
            return nil
        })
    }
    if err := g.Wait(); err != nil {
        panic(err)
    }
    close(ch)
    wg.Wait()
    // time.Sleep(time.Second)
    fmt.Println(out) // should have the same elements as [0 1 0 1 2]
}

func helper(n int) []int {
    out := make([]int, 0)
    for i := 0; i < n; i++ {
        out = append(out, i)
    }
    return out
}

How could I read a text file and implement a tree based on those values?

I'm trying to implement the DFS (Depth First Search) algorithm using Go, but my current code needs to add node after node manually to build the tree. I want to read a text file with this data (example):
75
95 64
17 47 82
18 35 87 10
20 04 83 47 65
And build the tree with these values. The root value will be 75, its left child 95, its right child 64, and so on.
This is my complete code:
// Package main implements the DFS algorithm
package main
import (
"bufio"
"flag"
"fmt"
"log"
"os"
"strconv"
"strings"
"sync"
)
// Node handle all the tree data
type Node struct {
Data interface {}
Left *Node
Right *Node
}
// NewNode creates a new node to the tree
func NewNode(data interface{}) *Node {
node := new(Node)
node.Data = data
node.Left = nil
node.Right = nil
return node
}
// FillNodes create all the nodes based on each value on file
func FillNodes(lines *[][]string) {
nodes := *lines
rootInt, _ := strconv.Atoi(nodes[0][0])
root := NewNode(rootInt)
// add the values here
wg.Add(1)
go root.DFS()
wg.Wait()
}
// ProcessNode checks and print the actual node
func (n *Node) ProcessNode() {
defer wg.Done()
var hello []int
for i := 0; i < 10000; i++ {
hello = append(hello, i)
}
fmt.Printf("Node %v\n", n.Data)
}
// DFS calls itself on each node
func (n *Node) DFS() {
defer wg.Done()
if n == nil {
return
}
wg.Add(1)
go n.Left.DFS()
wg.Add(1)
go n.ProcessNode()
wg.Add(1)
go n.Right.DFS()
}
// CheckError handle erros check
func CheckError(err error) {
if err != nil {
log.Fatal(err)
}
}
// OpenFile handle reading data from a text file
func OpenFile() [][]string {
var lines [][]string
ftpr := flag.String("fpath", "pyramid2.txt", "./pyramid2.txt")
flag.Parse()
f, err := os.Open(*ftpr)
CheckError(err)
defer func() {
if err := f.Close(); err != nil {
log.Fatal(err)
}
}()
s := bufio.NewScanner(f)
for s.Scan() {
line := strings.Fields(s.Text())
lines = append(lines, line)
}
err = s.Err()
CheckError(err)
return lines
}
var wg sync.WaitGroup
// Main creates the tree and call DFS
func main() {
nodes := OpenFile()
FillNodes(&nodes)
}
What would be a possible solution to this? Also, how could I convert all those strings to int in an easy way?
Here is a method for the creation of the tree (I didn't test it):
func FillLevel(parents []*Node, level []string) (children []*Node, err error) {
    if len(parents)+1 != len(level) {
        return nil, errors.New("params size not OK")
    }
    for i, p := range parents {
        leftVal, err := strconv.Atoi(level[i])
        if err != nil {
            return nil, err
        }
        rightVal, err := strconv.Atoi(level[i+1])
        if err != nil {
            return nil, err
        }
        p.Left = NewNode(leftVal)
        p.Right = NewNode(rightVal)
        children = append(children, p.Left)
        if i == len(parents)-1 {
            children = append(children, p.Right)
        }
    }
    return children, nil
}
func FillNodes(lines *[][]string) (*Node, error) {
    nodes := *lines
    rootInt, _ := strconv.Atoi(nodes[0][0])
    root := NewNode(rootInt)
    // add the values here
    parents := []*Node{root}
    for _, level := range nodes[1:] {
        parents, _ = FillLevel(parents, level)
    }
    return root, nil
}

func main() {
    nodes := OpenFile()
    r, _ := FillNodes(&nodes)
    wg.Add(1)
    r.DFS()
    wg.Wait()
}
If this is for production, my advice is to TDD it, handle all the errors correctly, and decide what your software should do about each one of them. You can also write some benchmarks and then optimize the algorithm using goroutines (if applicable).
The way you're doing right now, you're better off without goroutines:
Imagine you have a huge tree with 1M nodes: the DFS func will recursively launch 1M goroutines, each with an additional memory and CPU cost that does little to justify it. You need a better way of splitting the work across far fewer goroutines, maybe 10000 nodes per goroutine.
I would strongly advise you to write a version without goroutines, study its complexity, and write benchmarks to validate the expected complexity. Once you have that, start looking for a strategy to introduce goroutines, and validate that it is more efficient than what you already have.
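To illustrate that advice, a plain sequential in-order DFS (left subtree, node, right subtree, matching the order the question's goroutine version attempts) might look like this. This is a sketch, not the poster's code; the `visit` callback is an assumption to keep the traversal testable:

```go
package main

import "fmt"

// Node mirrors the question's tree shape, with an int payload for brevity.
type Node struct {
    Data  int
    Left  *Node
    Right *Node
}

// DFS visits the tree in-order without goroutines. The nil-receiver
// check makes calls on missing children safe, so no wrapping is needed.
func (n *Node) DFS(visit func(int)) {
    if n == nil {
        return
    }
    n.Left.DFS(visit)
    visit(n.Data)
    n.Right.DFS(visit)
}

func main() {
    // The first two levels of the file in the question: 75, then 95 and 64.
    root := &Node{
        Data:  75,
        Left:  &Node{Data: 95},
        Right: &Node{Data: 64},
    }
    root.DFS(func(d int) { fmt.Println(d) })
}
```

For tree traversal this sequential version is typically much faster than one goroutine per node, since each visit does far less work than a goroutine costs to schedule; it also gives you a baseline to benchmark any concurrent variant against.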

Go program does not deadlock but also never returns

The attached gist is a simple program using channels in a producer / multi-consumer model. For some reason,
go run channels.go prints all the results but never returns (and it does not deadlock, or at least Go doesn't give me the panic it normally does when a deadlock occurs).
type walkietalkie struct {
    in   chan int
    out  chan int
    quit chan bool
}

var items []int = []int{
    0, 1, 2, 3, 4, 5,
}

func work1(q walkietalkie) {
    for {
        select {
        case a, more := <-q.in:
            if more {
                q.out <- a * 2
            }
        default:
            break
        }
    }
}

func work2(q walkietalkie) {
    for {
        select {
        case a, more := <-q.in:
            if more {
                q.out <- a * -1
            }
        default:
            break
        }
    }
}

func work3(q walkietalkie) {
    for {
        select {
        case a, more := <-q.in:
            if more {
                q.out <- a * 7
            }
        default:
            break
        }
    }
}

func main() {
    results := make(chan int, 18)
    defer close(results)
    w := []walkietalkie{
        walkietalkie{in: make(chan int, 6), out: results, quit: make(chan bool, 1)},
        walkietalkie{in: make(chan int, 6), out: results, quit: make(chan bool, 1)},
        walkietalkie{in: make(chan int, 6), out: results, quit: make(chan bool, 1)},
    }
    go work1(w[0])
    go work2(w[1])
    go work3(w[2])
    // Iterate over work items
    l := len(items)
    for i, e := range items {
        // Send the work item to each worker
        for _, f := range w {
            f.in <- e       // send the work item
            if i == l-1 {   // This is the last input, close the channel
                close(f.in)
            }
        }
    }
    // Read all the results from the workers
    for {
        select {
        case r, more := <-results:
            if more {
                fmt.Println(r)
            } else {
                continue
            }
        default:
            break
        }
    }
}
You have a few problems.
First, reading from a channel with the two-value form, as in
case a, more := <-q.in
will proceed on a closed channel, with more set to false. In your case the default branch is never hit.
But those are in goroutines and wouldn't stop the program from exiting. The problem is that your main goroutine is doing the same thing. Also, as it turns out, break breaks out of a select as well as a for loop, so if you want to break the for loop you need to use a label and break LABEL.
As an alternative, you could also just return instead of breaking in your main goroutine.
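A minimal sketch of the labeled break (the `drain` helper is an illustration, not the question's code):

```go
package main

import "fmt"

// drain reads from ch until it is closed. The label lets break exit the
// for loop; an unlabeled break here would only exit the select, which is
// exactly the bug in the question's read loop.
func drain(ch <-chan int) []int {
    var out []int
loop:
    for {
        select {
        case v, more := <-ch:
            if !more {
                break loop // without the label, this only leaves the select
            }
            out = append(out, v)
        }
    }
    return out
}

func main() {
    ch := make(chan int, 3)
    ch <- 1
    ch <- 2
    ch <- 3
    close(ch)
    fmt.Println(drain(ch))
}
```

Since the only exit condition is the channel closing, the select could equally be replaced by a plain `for v := range ch` loop; the labeled form is shown because it generalizes to selects with more cases.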

continue executing loop when exception happens

During the first execution of the loop in the code snippet, an exception is thrown. My question is how to continue with the next iteration of the loop when a panic happens, like the try/catch mechanism in Java, where the loop would continue with the next iteration.
package main

import (
    "fmt"
)

func main() {
    var arr []int = []int{5, 6, 7, 8, 9}
    fmt.Println(arr)
    for i := 6; i < 10; i++ {
        defer func() {
            fmt.Println("aaa")
            if err := recover(); err != nil {
                fmt.Printf("error is %v\n", err)
            }
        }()
        arr[i] = i
    }
}
The issue is that your slice has a length and capacity of 5,
https://play.golang.org/p/7wy91PTPum
and you are trying to write at index 6, which is past the end.
You need to either set a fixed size that you know will hold everything you want to put into it:
var arr [10]int = [10]int{5, 6, 7, 8, 9}
https://play.golang.org/p/GSNDXGt1Jp
Or use append and change
arr[i] = i
to
arr = append(arr, i)
https://play.golang.org/p/kHNsFpcjVx
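A runnable sketch of the append fix (the `growAndSet` helper is an illustration, not from the original answer):

```go
package main

import "fmt"

// growAndSet appends the values from..to-1 instead of indexing past the
// end of the slice, which is what panics in the question's loop.
func growAndSet(arr []int, from, to int) []int {
    for i := from; i < to; i++ {
        arr = append(arr, i)
    }
    return arr
}

func main() {
    arr := []int{5, 6, 7, 8, 9}
    arr = growAndSet(arr, 6, 10)
    fmt.Println(arr)
}
```

append grows the slice (reallocating the backing array when capacity runs out) and returns the new header, which is why the result must be assigned back.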
You could wrap all work inside some func, and call defer with recover inside it
package main

import (
    "fmt"
)

func doSmth(arr []int, idx int) {
    defer func() {
        if err := recover(); err != nil {
            fmt.Printf("error is %v\n", err)
        }
    }()
    arr[idx] = idx
}

func main() {
    var arr []int = []int{5, 6, 7, 8, 9}
    fmt.Println(arr)
    for i := 6; i < 10; i++ {
        doSmth(arr, i)
    }
}
