I have a function that recursively spawns goroutines to walk a DOM tree, putting the nodes they find into a channel shared between all of them.
import (
"golang.org/x/net/html"
"sync"
)
func walk(doc *html.Node, ch chan *html.Node) {
var wg sync.WaitGroup
defer close(ch)
var f func(*html.Node)
f = func(n *html.Node) {
defer wg.Done()
ch <- n
for c := n.FirstChild; c != nil; c = c.NextSibling {
wg.Add(1)
go f(c)
}
}
wg.Add(1)
go f(doc)
wg.Wait()
}
Which I'd call like this:
// get the webpage using http
// parse the html into doc
ch := make(chan *html.Node)
go walk(doc, ch)
for c := range ch {
if someCondition(c) {
// do something with c
// quit all goroutines spawned by walk
}
}
I am wondering how I could quit all of these goroutines (i.e. close ch) once I have found a node of a certain type or some other condition has been fulfilled. I have tried using a quit channel that would be polled before spawning new goroutines, closing ch when a value was received, but that led to race conditions where some goroutines tried to send on the channel that had just been closed by another one. I pondered using a mutex, but it seems inelegant and against the spirit of Go to protect a channel with a mutex. Is there an idiomatic way to do this using channels? If not, is there any way at all? Any input appreciated!
The context package provides this kind of functionality. Using context.Context together with a few idiomatic Go patterns, you can achieve what you need.
To start, you can check this article to get a better feel for cancellation with context: https://www.sohamkamani.com/blog/golang/2018-06-17-golang-using-context-cancellation/
Also make sure to check the official GoDoc: https://golang.org/pkg/context/
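Before adapting your function, here is a minimal, self-contained sketch of the cancellation pattern on its own (the worker name, counts and timings are illustrative, not part of your code): workers loop until their shared context is cancelled.
package main

import (
	"context"
	"fmt"
	"time"
)

func worker(ctx context.Context, id int) {
	for {
		select {
		case <-ctx.Done():
			fmt.Println("worker", id, "stopping:", ctx.Err())
			return
		default:
			time.Sleep(100 * time.Millisecond) // stand-in for real work
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	for i := 0; i < 3; i++ {
		go worker(ctx, i)
	}
	time.Sleep(time.Second)
	cancel()                           // every worker sees ctx.Done() and returns
	time.Sleep(100 * time.Millisecond) // give them a moment to print
}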
So to achieve this functionality your function should look more like:
func walk(ctx context.Context, doc *html.Node, ch chan *html.Node) {
	var wg sync.WaitGroup
	defer close(ch)
	var f func(*html.Node)
	f = func(n *html.Node) {
		defer wg.Done()
		select {
		case ch <- n:
		case <-ctx.Done():
			return // cancelled while waiting to send, stop here
		}
		for c := n.FirstChild; c != nil; c = c.NextSibling {
			select {
			case <-ctx.Done():
				return // cancelled, stop spawning children
			default:
				wg.Add(1)
				go f(c)
			}
		}
	}
	select {
	case <-ctx.Done():
		return // cancelled before we even started
	default:
		wg.Add(1)
		go f(doc)
		wg.Wait()
	}
}
And when calling the function, you will have something like:
// ...
ctx, cancelFunc := context.WithCancel(context.Background())
ch := make(chan *html.Node)
go walk(ctx, doc, ch) // run walk concurrently so this loop can drain ch
for value := range ch {
	// ...
	if someCondition {
		cancelFunc()
		// no need to break: once the goroutines stop, walk's deferred
		// close(ch) runs and the range loop ends on its own
	}
}
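As an aside, golang.org/x/sync/errgroup gives the same shape with less manual bookkeeping, since it couples the waiting and the cancellable context. This is only a sketch of the walk written that way, assuming you are willing to take that dependency:
func walk(ctx context.Context, doc *html.Node, ch chan *html.Node) error {
	defer close(ch)
	g, ctx := errgroup.WithContext(ctx)
	var f func(*html.Node) error
	f = func(n *html.Node) error {
		select {
		case ch <- n:
		case <-ctx.Done():
			return ctx.Err()
		}
		for c := n.FirstChild; c != nil; c = c.NextSibling {
			c := c // capture the loop variable for the goroutine
			g.Go(func() error { return f(c) })
		}
		return nil
	}
	g.Go(func() error { return f(doc) })
	return g.Wait() // returns the first non-nil error, e.g. context.Canceled
}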
func GoCountColumns(in chan []string, r chan Result, quit chan int) {
for {
select {
case data := <-in:
r <- countColumns(data) // some calculation function
case <-quit:
return // stop goroutine
}
}
}
func main() {
fmt.Println("Welcome to the csv Calculator")
file_path := os.Args[1]
fd, _ := os.Open(file_path)
reader := csv.NewReader(bufio.NewReader(fd))
var totalColumnsCount int64 = 0
var totallettersCount int64 = 0
linesCount := 0
numWorkers := 10000
rc := make(chan Result, numWorkers)
in := make(chan []string, numWorkers)
quit := make(chan int)
t1 := time.Now()
for i := 0; i < numWorkers; i++ {
go GoCountColumns(in, rc, quit)
}
//start workers
go func() {
for {
record, err := reader.Read()
if err == io.EOF {
break
}
if err != nil {
log.Fatal(err)
}
if linesCount%1000000 == 0 {
fmt.Println("Adding to the channel")
}
in <- record
//data := countColumns(record)
linesCount++
//totalColumnsCount = totalColumnsCount + data.ColumnCount
//totallettersCount = totallettersCount + data.LettersCount
}
close(in)
}()
for i := 0; i < numWorkers; i++ {
quit <- 1 // quit goroutines from main
}
close(rc)
for i := 0; i < linesCount; i++ {
data := <-rc
totalColumnsCount = totalColumnsCount + data.ColumnCount
totallettersCount = totallettersCount + data.LettersCount
}
fmt.Printf("I counted %d lines\n", linesCount)
fmt.Printf("I counted %d columns\n", totalColumnsCount)
fmt.Printf("I counted %d letters\n", totallettersCount)
elapsed := time.Now().Sub(t1)
fmt.Printf("It took %f seconds\n", elapsed.Seconds())
}
My Hello World is a program that reads a csv file and passes it to a channel. Then the goroutines should consume from this channel.
My problem is that I have no idea how to detect, from the main goroutine, that all the data has been processed so that I can exit the program.
Adding on top of the other answers:
Take (great) care that closing a channel should happen at the write site, not the read site. Since the r channel is written to in GoCountColumns, the responsibility for closing it lies with the GoCountColumns function. The technical reason is that it is the only actor that knows for sure the channel will not be written to anymore, and is therefore safe to close.
func GoCountColumns(in chan []string, r chan Result, quit chan int) {
defer close(r) // this line.
for {
select {
case data := <-in:
r <- countColumns(data) // some calculation function
case <-quit:
return // stop goroutine
}
}
}
The parameter-naming convention, if I might say, is to have the destination as the first parameter, the source as the second, and the other parameters after. GoCountColumns is preferably written:
func GoCountColumns(dst chan Result, src chan []string, quit chan int) {
defer close(dst)
for {
select {
case data := <-src:
dst <- countColumns(data) // some calculation function
case <-quit:
return // stop goroutine
}
}
}
You are sending on quit right after the processing starts. That is illogical. This quit command is a forced-exit sequence; it should be triggered once an exit signal is detected, to abort the current processing in the best state possible, possibly leaving it broken. In other words, you should rely on signal.Notify to capture exit events and then notify your workers to quit. See https://golang.org/pkg/os/signal/#example_Notify
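For reference, capturing that exit event is short (a sketch; adjust the signal set to your needs):
// imports: "os", "os/signal", "syscall"
sig := make(chan os.Signal, 1)                    // buffered, since signal.Notify will not block on the send
signal.Notify(sig, os.Interrupt, syscall.SIGTERM) // Ctrl-C or a polite kill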
To write better parallel code, first list the routines you need to manage the program's lifetime, and identify the ones you must block on to ensure the program has finished before exiting.
In your code there are two: read and map. To ensure complete processing, the main function must capture a signal that map has exited before exiting itself. Notice that the read function does not matter here.
Then, you will also need the code required to capture an exit event from user input.
Overall, it appears we need to block on two events to manage the lifetime. Schematically,
func main(){
go read()
go map(mapDone)
go signal()
select {
case <-mapDone:
case <-sig:
}
}
This simple code is fine for a process-or-die program. Indeed, when the user event is caught, the program exits immediately, without giving the other routines a chance to do anything they might need to do on shutdown.
To improve that behavior, you need, first, a way to signal to the other routines that the program wants to leave, and second, a way to wait for those routines to finish their stop sequence before leaving.
To signal an exit event, or cancellation, you can make use of a context.Context: pass it around to the workers and make them listen to it.
Again, schematically,
func main(){
ctx, cancel := context.WithCancel(context.Background())
go read(ctx)
go map(ctx,mapDone)
go signal()
select {
case <-mapDone:
case <-sig:
cancel()
}
}
(more onto read and map later)
To wait for completion, many things are possible, as long as they are thread safe. Usually a sync.WaitGroup is used. Or, in cases like yours where there is only one routine to wait for, we can reuse the existing mapDone channel.
func main(){
ctx, cancel := context.WithCancel(context.Background())
go read(ctx)
go map(ctx,mapDone)
go signal()
select {
case <-mapDone:
case <-sig:
cancel()
<-mapDone
}
}
That is simple and straightforward. But it is not totally correct: the final <-mapDone might block forever and make the program unstoppable. So you might implement a second signal handler, or a timeout.
Schematically, the timeout solution is
func main(){
ctx, cancel := context.WithCancel(context.Background())
go read(ctx)
go map(ctx,mapDone)
go signal()
select {
case <-mapDone:
case <-sig:
cancel()
select {
case <-mapDone:
case <-time.After(time.Second):
}
}
}
You might also combine the signal handling and a timeout in that last select.
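Still schematically (sig and mapDone as above), that combination would be:
select {
case <-mapDone:
case <-sig:
	cancel()
	select {
	case <-mapDone:
	case <-sig: // a second signal: the user insists, leave immediately
	case <-time.After(time.Second):
	}
}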
Finally, there are a few things to say about how read and map listen to the context.
Starting with map: the implementation needs to check the context's Done channel regularly to detect cancellation.
That is the easy part; it only requires updating the select statement.
func GoCountColumns(ctx context.Context, dst chan Result, src chan []string) {
defer close(dst)
for {
select {
case <-ctx.Done():
<-time.After(time.Minute) // do something more useful.
return // quit. Notice the defer will be called.
case data := <-src:
dst <- countColumns(data) // some calculation function
}
}
}
Now the read part is a bit more tricky: as it is IO, it does not provide a selectable interface, and listening for context cancellation might seem contradictory. It is. Because the IO is blocking, it is impossible to listen to the context; and while reading from the context channel, it is impossible to read the IO. In your case, the solution is to realize that the read loop is not relevant to your program's lifetime (recall we only listen on mapDone), so we can simply ignore the context there.
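(If the read did matter to the lifetime, one common workaround, sketched below, is to close the underlying file when the context is cancelled; a blocked Read then returns an error and the read loop can bail out. How reliably this unblocks depends on the kind of IO; it is most dependable for network connections. fd and reader are the ones from your main.)
go func() {
	<-ctx.Done()
	fd.Close() // a pending reader.Read now fails instead of blocking forever
}()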
In other cases, suppose for example you wanted to restart at the last byte read (so at every read we increment a counter n of bytes read, and we want to save that value on stop). Then a new routine has to be started, and thus multiple routines have to be waited on. In such cases a sync.WaitGroup is more appropriate.
Schematically,
func main(){
var wg sync.WaitGroup
processDone:=make(chan struct{})
ctx, cancel := context.WithCancel(context.Background())
go read(ctx)
wg.Add(1)
go saveN(ctx,&wg)
wg.Add(1)
go map(ctx,&wg)
go signal()
go func(){
wg.Wait()
close(processDone)
}()
select {
case <-processDone:
case <-sig:
cancel()
select {
case <-processDone:
case <-time.After(time.Second):
}
}
}
In this last code, the waitgroup is passed around. The routines are responsible for calling wg.Done(); when all routines are done, the processDone channel is closed to signal the select.
func GoCountColumns(ctx context.Context, dst chan Result, src chan []string, wg *sync.WaitGroup) {
defer wg.Done()
defer close(dst)
for {
select {
case <-ctx.Done():
<-time.After(time.Minute) // do something more useful.
return // quit. Notice the defer will be called.
case data := <-src:
dst <- countColumns(data) // some calculation function
}
}
}
It is undecided which pattern is preferred, but you might also see the waitgroup managed at the call sites only.
func main(){
var wg sync.WaitGroup
processDone:=make(chan struct{})
ctx, cancel := context.WithCancel(context.Background())
go read(ctx)
wg.Add(1)
go func(){
defer wg.Done()
saveN(ctx)
}()
wg.Add(1)
go func(){
defer wg.Done()
map(ctx)
}()
go signal()
go func(){
wg.Wait()
close(processDone)
}()
select {
case <-processDone:
case <-sig:
cancel()
select {
case <-processDone:
case <-time.After(time.Second):
}
}
}
Beyond all of that and the OP's questions, you must always evaluate upfront whether parallel processing is pertinent for a given task. There is no unique recipe; practice and measure your code's performance. See pprof.
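For the measuring part, the net/http/pprof handler is easy to wire in during development (a sketch; the port is arbitrary):
// at the top of the file:
//   import _ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
//   import "net/http"
//   import "log"

go func() {
	log.Println(http.ListenAndServe("localhost:6060", nil))
}()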
There is way too much going on in this code. You should restructure your code into short functions that serve specific purposes to make it possible for someone to help you out easily (and help yourself as well).
You should read the following Go article, which goes into concurrency patterns:
https://blog.golang.org/pipelines
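For flavor, the basic shape promoted by that article is a stage that owns and closes its own output channel. Condensed into a sketch (not a verbatim excerpt), two stages look like this:
// gen emits the given numbers on a channel it owns, then closes it.
func gen(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

// sq reads from in, squares each value, and closes its own output once in is drained.
func sq(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			out <- n * n
		}
	}()
	return out
}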
There are multiple ways to make one go-routine wait on some other work to finish. The most common ways are with wait groups (example I have provided) or channels.
func processSomething(...) {
...
}
func main() {
workers := &sync.WaitGroup{}
for i := 0; i < numWorkers; i++ {
workers.Add(1) // you want to call this from the calling go-routine and before spawning the worker go-routine
go func() {
defer workers.Done() // you want to call this from the worker go-routine when the work is done (NOTE the defer, which ensures it is called no matter what)
processSomething(....) // your async processing
}()
}
// this will block until all workers have finished their work
workers.Wait()
}
You can use a channel to block main until completion of a goroutine.
package main
import (
"log"
"time"
)
func main() {
c := make(chan struct{})
go func() {
time.Sleep(3 * time.Second)
log.Println("bye")
close(c)
}()
// This blocks until the channel is closed by the routine
<-c
}
No need to write anything into the channel. A receive blocks until data is sent or, as we use here, until the channel is closed.
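A close variant (a sketch) sends a result on the channel instead of closing it, which lets the goroutine report a value or an error back to main:
package main

import (
	"log"
	"time"
)

func main() {
	done := make(chan error, 1) // buffered so the goroutine never blocks on the send

	go func() {
		time.Sleep(3 * time.Second)
		done <- nil // report success (or a real error) back to main
	}()

	if err := <-done; err != nil {
		log.Fatal(err)
	}
	log.Println("bye")
}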
I need several goroutines to write to the same channel. All the data is then read in one place until all the goroutines have completed their work. But I'm not sure how best to close this channel.
this is my example implementation:
func main() {
ch := make(chan data)
wg := &sync.WaitGroup{}
for instance := range dataSet {
wg.Add(1)
go doStuff(ch, instance)
}
go func() {
wg.Wait()
close(ch)
}()
for v := range ch { //range until it closes
//proceed v
}
}
func doStuff(ch chan data, instance data) {
//do some stuff with instance...
ch <- instance
}
but I'm not sure that it is idiomatic.
As you are using a WaitGroup and increase the counter when starting each new goroutine, you have to notify the WaitGroup when a goroutine has finished by calling the Done() method. You also have to pass the same WaitGroup to the goroutine; you can do that by passing its address. Otherwise each goroutine would use its own copy of the WaitGroup, scoped to itself.
func main() {
ch := make(chan data)
wg := &sync.WaitGroup{}
for _, instance := range dataSet {
wg.Add(1)
go doStuff(ch, instance, wg)
}
go func() {
wg.Wait()
close(ch)
}()
for v := range ch { //range until it closes
//proceed v
}
}
func doStuff(ch chan data, instance data, wg *sync.WaitGroup) {
//do some stuff with instance...
ch <- instance
// call done method to decrease the counter of WaitGroup
wg.Done()
}
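A slightly more defensive variant of doStuff (a sketch) defers the Done call, so the counter is decremented even if the body panics or later gains an early return:
func doStuff(ch chan data, instance data, wg *sync.WaitGroup) {
	defer wg.Done() // runs no matter how this function exits
	//do some stuff with instance...
	ch <- instance
}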
I have an issue. Here is an example: https://play.golang.org/p/QSWY2INQuSE
func Avg(c chan string, wg *sync.WaitGroup) {
defer wg.Done()
c <- "test"
}
func main() {
var wg sync.WaitGroup
c := make(chan string)
timer1 := time.NewTicker(5 * time.Second)
for {
select {
case <-timer1.C:
wg.Add(1)
go Avg(c, &wg)
wg.Wait()
}
}
fmt.Println(<-c)
}
Why does the data never reach fmt.Println(<-c)?
Thank you!
Because you have an endless for loop, the last fmt.Println() statement is never reached.
You have to break out of the loop if you want the last fmt.Println() statement to ever execute, for example:
loop:
for {
select {
case <-timer1.C:
wg.Add(1)
go Avg(c, &wg)
wg.Wait()
break loop
}
}
fmt.Println(<-c)
Note that you have to use a label, else the break would only break out of the select statement (and not from the for loop).
Also note that this alone won't work, as the channel is unbuffered, and thus Avg() will be blocked forever, trying to send a value on c while no one is ever trying to receive from it.
This simple example can be made working if you create the channel to be buffered:
c := make(chan string, 1) // Buffer for 1 value
Now it works and prints (try it on the Go Playground):
test
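If you would rather keep the channel unbuffered, another option (a sketch) is to do the receive inside the loop, pairing it with Avg's send, and drop the final fmt.Println(<-c):
case <-timer1.C:
	wg.Add(1)
	go Avg(c, &wg)
	fmt.Println(<-c) // receive while Avg is blocked on its send; Avg can then finish
	wg.Wait()
	break loop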
I'm here to find out the most idiomatic way to do the following task.
Task:
Write data from a channel to a file.
Problem:
I have a channel ch := make(chan int, 100)
I need to read from the channel and write the values I read from the channel to a file. My question is basically how do I do so given that
If channel ch is full, write the values immediately
If channel ch is not full, write every 5s.
So essentially, data needs to be written to the file at least every 5s (assuming that data will be filled into the channel at least every 5s)
What's the best way to use select, for and range to do the above task?
Thanks!
There is no such "event" as "buffer of channel is full", so you can't detect that [*]. This means you can't idiomatically solve your problem with language primitives using only 1 channel.
[*] Not entirely true: you could detect if the buffer of a channel is full by using select with default case when sending on the channel, but that requires logic from the senders, and repetitive attempts to send.
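That sender-side check is a non-blocking send (a sketch):
select {
case ch <- v:
	// there was room in the buffer (or a waiting receiver); the value is sent
default:
	// the send would have blocked: the buffer is full, handle that case here
}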
I would use another channel, receive from it as values are sent on it, and "redirect" (store) the values into an internal channel which has a buffer of 100 as you mentioned. At each redirection you can check whether the internal channel's buffer is full, and if so, do an immediate write. If not, continue to monitor the "incoming" channel and a timer channel with a select statement, and if the timer fires, do a "regular" write.
You may use len(chInternal) to check how many elements are in the chInternal channel, and cap(chInternal) to check its capacity. Note that this is "safe" only because we are the only goroutine handling the chInternal channel; if there were multiple goroutines, the value returned by len(chInternal) could be outdated by the time we used it for something (e.g. comparing it).
In this solution chInternal (as its name says) is for internal use only. Others should only send values on ch. Note that ch may or may not be a buffered channel, solution works in both cases. However, you may improve efficiency if you also give some buffer to ch (so chances that senders get blocked will be lower).
var (
chInternal = make(chan int, 100)
ch = make(chan int) // You may (should) make this a buffered channel too
)
func main() {
delay := time.Second * 5
timer := time.NewTimer(delay)
for {
select {
case v := <-ch:
chInternal <- v
if len(chInternal) == cap(chInternal) {
doWrite() // Buffer is full, we need to write immediately
timer.Reset(delay)
}
case <-timer.C:
doWrite() // "Regular" write: 5 seconds have passed since last write
timer.Reset(delay)
}
}
}
If an immediate write happens (due to a "buffer full" situation), this solution will time the next "regular" write 5 seconds after this. If you don't want this and you want the 5-second regular writes be independent from the immediate writes, simply do not reset the timer following the immediate write.
An implementation of doWrite() may be as follows:
var f *os.File // Make sure to open file for writing
func doWrite() {
for {
select {
case v := <-chInternal:
fmt.Fprintf(f, "%d ", v) // Write v to the file
default: // Stop when no more values in chInternal
return
}
}
}
We can't use for ... range as that only returns when the channel is closed, but our chInternal channel is not closed. So we use a select with a default case so when no more values are in the buffer of chInternal, we return.
Improvements
Using a slice instead of 2nd channel
Since the chInternal channel is only used by us, and only on a single goroutine, we may also choose to use a single []int slice instead of a channel (reading/writing a slice is much faster than a channel).
Showing only the different / changed parts, it could look something like this:
var (
buf = make([]int, 0, 100)
)
func main() {
// ...
for {
select {
case v := <-ch:
buf = append(buf, v)
if len(buf) == cap(buf) {
// ...
}
}
func doWrite() {
for _, v := range buf {
fmt.Fprintf(f, "%d ", v) // Write v to the file
}
buf = buf[:0] // "Clear" the buffer
}
With multiple goroutines
If we stick to leave chInternal a channel, the doWrite() function may be called on another goroutine to not block the other one, e.g. go doWrite(). Since data to write is read from a channel (chInternal), this requires no further synchronization.
If all you need is the periodic write, then to increase file-write performance you may fill the channel whenever you need and let a writer goroutine write that data through a buffered file writer. See this simple and idiomatic sample, with no timer at all, just for...range:
package main
import (
"bufio"
"fmt"
"os"
"sync"
)
var wg sync.WaitGroup
func WriteToFile(filename string, ch chan int) {
f, e := os.Create(filename)
if e != nil {
panic(e)
}
w := bufio.NewWriterSize(f, 4*1024*1024)
defer wg.Done()
defer f.Close()
defer w.Flush()
for v := range ch {
fmt.Fprintf(w, "%d ", v)
}
}
func main() {
ch := make(chan int, 100)
wg.Add(1)
go WriteToFile("file.txt", ch)
for i := 0; i < 500000; i++ {
ch <- i // do the job
}
close(ch) // Finish the job and close output file
wg.Wait()
}
Notice the order of the defers: deferred calls run last-in-first-out, so w.Flush() runs before f.Close().
And for the 5-second case, you may add an interval timer just to flush this file's buffer to disk, like this:
package main
import (
"bufio"
"fmt"
"os"
"sync"
"time"
)
var wg sync.WaitGroup
func WriteToFile(filename string, ch chan int) {
f, e := os.Create(filename)
if e != nil {
panic(e)
}
w := bufio.NewWriterSize(f, 4*1024*1024)
ticker := time.NewTicker(5 * time.Second)
quit := make(chan struct{})
go func() {
for {
select {
case <-ticker.C:
if w.Buffered() > 0 {
fmt.Println(w.Buffered())
w.Flush()
}
case <-quit:
ticker.Stop()
return
}
}
}()
defer wg.Done()
defer f.Close()
defer w.Flush()
defer close(quit)
for v := range ch {
fmt.Fprintf(w, "%d ", v)
}
}
func main() {
ch := make(chan int, 100)
wg.Add(1)
go WriteToFile("file.txt", ch)
for i := 0; i < 25; i++ {
ch <- i // do the job
time.Sleep(500 * time.Millisecond)
}
close(ch) // Finish the job and close output file
wg.Wait()
}
Here I used time.NewTicker(5 * time.Second) as the interval timer together with a quit channel; you may use time.AfterFunc(), time.Tick() or time.Sleep() instead.
With some optimizations (removing the quit channel):
package main
import (
"bufio"
"fmt"
"os"
"sync"
"time"
)
var wg sync.WaitGroup
func WriteToFile(filename string, ch chan int) {
f, e := os.Create(filename)
if e != nil {
panic(e)
}
w := bufio.NewWriterSize(f, 4*1024*1024)
ticker := time.NewTicker(5 * time.Second)
defer wg.Done()
defer f.Close()
defer w.Flush()
for {
select {
case v, ok := <-ch:
if ok {
fmt.Fprintf(w, "%d ", v)
} else {
fmt.Println("done.")
ticker.Stop()
return
}
case <-ticker.C:
if w.Buffered() > 0 {
fmt.Println(w.Buffered())
w.Flush()
}
}
}
}
func main() {
ch := make(chan int, 100)
wg.Add(1)
go WriteToFile("file.txt", ch)
for i := 0; i < 25; i++ {
ch <- i // do the job
time.Sleep(500 * time.Millisecond)
}
close(ch) // Finish the job and close output file
wg.Wait()
}
I hope this helps.
I have some issues with the following code:
package main
import (
"fmt"
"sync"
)
// This program should go to 11, but sometimes it only prints 1 to 10.
func main() {
ch := make(chan int)
var wg sync.WaitGroup
wg.Add(2)
go Print(ch, wg) //
go func(){
for i := 1; i <= 11; i++ {
ch <- i
}
close(ch)
defer wg.Done()
}()
wg.Wait() //deadlock here
}
// Print prints all numbers sent on the channel.
// The function returns when the channel is closed.
func Print(ch <-chan int, wg sync.WaitGroup) {
for n := range ch { // reads from channel until it's closed
fmt.Println(n)
}
defer wg.Done()
}
I get a deadlock at the specified place. I have tried setting wg.Add(1) instead of 2 and it solves my problem. My belief is that I'm not successfully sending the channel as an argument to the Printer function. Is there a way to do that? Otherwise, a solution to my problem is replacing the go Print(ch, wg) line with:
go func() {
Print(ch)
defer wg.Done()
}()
and changing the Printer function to:
func Print(ch <-chan int) {
for n := range ch { // reads from channel until it's closed
fmt.Println(n)
}
}
What is the best solution?
Well, first your actual error is that you're giving the Print method a copy of the sync.WaitGroup, so it doesn't call the Done() method on the one you're Wait()ing on.
Try this instead:
package main
import (
"fmt"
"sync"
)
func main() {
ch := make(chan int)
var wg sync.WaitGroup
wg.Add(2)
go Print(ch, &wg)
go func() {
for i := 1; i <= 11; i++ {
ch <- i
}
close(ch)
defer wg.Done()
}()
wg.Wait() //deadlock here
}
func Print(ch <-chan int, wg *sync.WaitGroup) {
for n := range ch { // reads from channel until it's closed
fmt.Println(n)
}
defer wg.Done()
}
Now, changing your Print method so that it no longer takes the WaitGroup is generally a good idea: the method doesn't need to know that something is waiting for it to finish its job.
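Wired up at the call site, that refactor would look roughly like this (a sketch):
package main

import (
	"fmt"
	"sync"
)

func main() {
	ch := make(chan int)
	var wg sync.WaitGroup

	wg.Add(1)
	go func() {
		defer wg.Done()
		Print(ch) // Print no longer knows anything is waiting on it
	}()

	wg.Add(1)
	go func() {
		defer wg.Done()
		for i := 1; i <= 11; i++ {
			ch <- i
		}
		close(ch)
	}()

	wg.Wait()
}

// Print prints all numbers sent on the channel and returns when it is closed.
func Print(ch <-chan int) {
	for n := range ch {
		fmt.Println(n)
	}
}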
I agree with #Elwinar's solution that the main problem in your code is caused by passing a copy of your WaitGroup to the Print function.
This means wg.Done() operates on a copy of the wg you defined in main. Therefore, the wg in main never gets decremented, and so a deadlock happens when you wg.Wait() in main.
Since you are also asking about the best practice, I could give you some suggestions of my own:
Don't remove defer wg.Done() in Print. Since your goroutine in main is a sender and Print is a receiver, removing wg.Done() from the receiver routine would leave the receiver unwaited for: only your sender would be synced with main, so after your sender is done, main is done, but it is possible that the receiver is still working. My point is: don't leave dangling goroutines around after your main routine has finished. Close them or wait for them.
Remember to do panic recovery everywhere, especially in anonymous goroutines. I have seen a lot of Go programmers forget to put panic recovery in goroutines, even if they remember to put recover in normal functions. It's critical when you want your code to behave correctly, or at least gracefully, when something unexpected happens.
Use defer before every critical call, such as sync-related calls, right at the beginning, since you don't know where the code could break. Let's say you removed the defer before wg.Done(), and a panic occurs in the anonymous goroutine in your example. If you don't have panic recovery, it will panic. But what happens if you do have panic recovery? Everything's fine now? No. You will get a deadlock at wg.Wait(), since your wg.Done() was skipped because of the panic! However, by using defer, wg.Done() is executed at the end even if a panic happened. Defer before close is important too, since its result also affects the communication.
So here is the code modified according to the points I mentioned above:
package main
import (
"fmt"
"sync"
)
func main() {
ch := make(chan int)
var wg sync.WaitGroup
wg.Add(2)
go Print(ch, &wg)
go func() {
defer func() {
if r := recover(); r != nil {
println("panic:" + r.(string))
}
}()
defer func() {
wg.Done()
}()
for i := 1; i <= 11; i++ {
ch <- i
if i == 7 {
panic("ahaha")
}
}
println("sender done")
close(ch)
}()
wg.Wait()
}
func Print(ch <-chan int, wg *sync.WaitGroup) {
defer func() {
if r := recover(); r != nil {
println("panic:" + r.(string))
}
}()
defer wg.Done()
for n := range ch {
fmt.Println(n)
}
println("print done")
}
Hope it helps :)