I'm using Go with the gotk3 and webkit2 libraries to try and build a web crawler that can parse JavaScript in the context of a WebKitWebView.
With performance in mind, I'm trying to figure out the best way to have it crawl concurrently (and ideally in parallel, across multiple processors), using all available resources.
GTK, threads, and goroutines are all pretty new to me. The gotk3 goroutines example states:
Native GTK is not thread safe, and thus, gotk3's GTK bindings may not be used from other goroutines. Instead, glib.IdleAdd() must be used to add a function to run in the GTK main loop when it is in an idle state.
Go panics and shows a stack trace when I try to run a function that creates a new WebView in a goroutine. I'm not exactly sure why this happens, but I think it has something to do with that comment. An example is shown below.
Current Code
Here's my current code, which has been adapted from the webkit2 example:
package main

import (
	"fmt"

	"github.com/gotk3/gotk3/glib"
	"github.com/gotk3/gotk3/gtk"
	"github.com/sourcegraph/go-webkit2/webkit2"
	"github.com/sqs/gojs"
)

func crawlPage(url string) {
	web := webkit2.NewWebView()
	web.Connect("load-changed", func(_ *glib.Object, i int) {
		loadEvent := webkit2.LoadEvent(i)
		switch loadEvent {
		case webkit2.LoadFinished:
			fmt.Printf("Load finished for: %v\n", url)
			web.RunJavaScript("window.location.hostname", func(val *gojs.Value, err error) {
				if err != nil {
					fmt.Println("JavaScript error.")
				} else {
					fmt.Printf("Hostname (from JavaScript): %q\n", val)
				}
				//gtk.MainQuit()
			})
		}
	})
	glib.IdleAdd(func() bool {
		web.LoadURI(url)
		return false
	})
}

func main() {
	gtk.Init(nil)

	crawlPage("https://www.google.com")
	crawlPage("https://www.yahoo.com")
	crawlPage("https://github.com")
	crawlPage("http://deelay.me/2000/http://deelay.me/img/1000ms.gif")

	gtk.Main()
}
It seems that creating a new WebView for each URL allows them to load concurrently. Having glib.IdleAdd() running in a goroutine, as per the gotk3 example, doesn't seem to have any effect (although I'm only doing a visual benchmark):
go glib.IdleAdd(func() bool { // Works
	web.LoadURI(url)
	return false
})
However, trying to create a goroutine for each crawlPage() call ends in a panic:
go crawlPage("https://www.google.com") // Panics and shows stack trace
I can run web.RunJavaScript() in a goroutine without issue:
switch loadEvent {
case webkit2.LoadFinished:
	fmt.Printf("Load finished for: %v\n", url)
	go web.RunJavaScript("window.location.hostname", func(val *gojs.Value, err error) { // Works
		if err != nil {
			fmt.Println("JavaScript error.")
		} else {
			fmt.Printf("Hostname (from JavaScript): %q\n", val)
		}
		//gtk.MainQuit()
	})
}
Best Method?
The current methods I can think of are:
Spawn a new WebView to crawl each page, as in the current code. Track how many WebViews are open, and either continually destroy and create them or reuse a fixed number created up front, so that all available resources on the machine are used. Would this be limited in how many processor cores it can use?
The basic idea of #1, but running the binary multiple times (instead of one gocrawler process on the machine, have four) to utilize all cores/resources.
Run the GUI (gtk3) portion of the app in its own goroutine. I could then pass data to other goroutines which do their own heavy processing, such as searching through content.
What would actually be the best way to run this code concurrently, if possible, and max out performance?
Update
Method 1 and 2 are probably out of the picture, as I ran a test by spawning ~100 WebViews and they seem to load synchronously.
Related
Currently I'm implementing a caching system using the standard library's net/http package.
An endpoint parses a key and validates the request using the isOK(key) function. If it is not OK, one goroutine is sent to makeSureNowOK(key, endpoint) to make sure isOK(key) will return true on the next request.
My simplified solution looks as follows:
func (ep *Endpoint) Handler() func(...) {
	for {
		ep.mu.Lock()
		// WAITINGROOM //
		//lint:ignore SA2001 empty critical section
		ep.mu.Unlock()

		bytesBody, err := isOK(key)
		if err != nil {
			select {
			case <-ep.pause:
				go makeSureNowOK(key)
			default:
			}
		} else {
			...
			return
		}
	}
}

func makeSureNowOK(key string, ep ...) {
	ep.mu.Lock()
	... do validation ...
	ep.pause <- struct{}{}
	ep.mu.Unlock()
}
So I'm using a mutex to block further executions, and a channel with select to catch goroutines that got past the isOK check.
Another idea, to avoid the mutex, is to use a closed channel to let goroutines pass. But then I'd have to recreate it to block them again, which feels somewhat hacky.
How would you approach this problem?
Edit: To make my question clearer: the code above works as described, but I feel that creating a "waiting room" by calling .Unlock() immediately after .Lock() is not a clean way to achieve this. Do you have other suggestions?
An alternative would be a sync.WaitGroup, but then I'd have to call waitgroup.Wait() (where I'm currently locking/unlocking the mutex), and that would run before waitgroup.Add(), which is just as bad.
I have a Go function exposed in a .wasm file and accessible from JS:
app.computePrimes = js.FuncOf(func(this js.Value, args []js.Value) interface{} {
	handler := js.FuncOf(func(this js.Value, args []js.Value) interface{} {
		resolve := args[0]

		// Commented out because this Promise never fails
		//reject := args[1]

		// Now that we have a way to return the response to JS, spawn a goroutine.
		// This way, we don't block the event loop and avoid a deadlock.
		go func() {
			app.console.Call("log", "starting")
			for i := 2; i < 100000; i++ {
				if big.NewInt(int64(i)).ProbablyPrime(20) && i > 20000 {
					app.console.Call("log", i)
				}
			}
			app.console.Call("log", "finishing")
			resolve.Invoke("Done")
		}()

		// The handler of a Promise doesn't return any value
		return nil
	})

	return js.Global().Get("Promise").New(handler)
})
Despite the fact that it returns a Promise and executes the CPU-bound part in a goroutine, on the Web side it feels like everything is running on the main UI thread. I have read a bit on the state of WebAssembly development, and it seems that multi-threaded workloads are not yet commonplace.
Is a Web worker the only preferred way to execute such tasks?
Yes, I think you answered this one yourself. As long as WASM does not support something like lightweight threads / concurrency itself (which would make Go's support for WASM a lot more appealing), you are kind of stuck doing this yourself with web workers or packages based on web workers.
You probably found those already:
https://pspdfkit.com/blog/2020/webassembly-in-a-web-worker/
https://www.sitepen.com/blog/using-webassembly-with-web-workers
I have a server working with websocket connections and a database. Some users connect via sockets, so I increment their "online" count in the DB when they connect and decrement it when they disconnect. But in case the server breaks down, I keep a local replica map[string]int of users online. I need to postpone server shutdown until a database request completes that decrements every user's "online" count according to my replica, because in that scenario the socket connections don't send the usual "close" event.
I have found the package github.com/xlab/closer, which handles some system calls and can run an action before the program finishes, but my database request doesn't work this way (code below).
func main() {
	...
	// trying to handle program finish event
	closer.Bind(cleanupSocketConnections(&pageHandler))
	...
}

// function that handles program finish event
func cleanupSocketConnections(p *controllers.PageHandler) func() {
	return func() {
		p.PageService.ResetOnlineUsers()
	}
}

// this map[string]int contains key=userId, value=count of socket connections
type PageService struct {
	Users map[string]int
}

func (p *PageService) ResetOnlineUsers() {
	for userId, count := range p.Users {
		// decrease online count of every user in the program's replica
		InfoService{}.DecreaseInfoOnline(userId, count)
	}
}
Maybe I'm using it incorrectly, or maybe there is a better way to intercept normal program termination?
First of all, executing tasks when the server "breaks down", as you said, is quite complicated, because breaking down can mean a lot of things, and nothing can guarantee that cleanup functions run when something goes really wrong on your server.
From an engineering point of view (if setting users offline on breakdown is that important), the best approach would be a secondary service on another server that receives user connection, disconnection, and ping events; if it receives no updates within a set timeout, it considers your server down and sets every user offline.
Back to your question, using defer and waiting for termination signals should cover 99% of cases. I commented the code to explain the logic.
// AllUsersOffline is called when the program is terminated. It takes a *sync.Once to make sure the cleanup
// is performed only one time, since it might be called from different goroutines.
func AllUsersOffline(once *sync.Once) {
	once.Do(func() {
		fmt.Print("setting all users offline...")
		// logic to set all users offline
	})
}

// CatchSigs catches termination signals and executes the f function at the end
func CatchSigs(f func()) {
	cSig := make(chan os.Signal, 1)
	// watch for these signals (note: SIGKILL cannot be caught or handled, so there is no point subscribing to it)
	signal.Notify(cSig, syscall.SIGTERM, syscall.SIGINT, syscall.SIGQUIT, syscall.SIGHUP) // these are the termination signals in GNU => https://www.gnu.org/software/libc/manual/html_node/Termination-Signals.html
	// wait for one of them
	sig := <-cSig
	fmt.Printf("received signal: %s", sig)
	// execute f
	f()
}

func main() {
	/* code */

	// the Once is used to make sure AllUsersOffline is performed ONE TIME
	usersOfflineOnce := &sync.Once{}

	// catch termination signals
	go CatchSigs(func() {
		// when a termination signal is caught, execute the AllUsersOffline function
		AllUsersOffline(usersOfflineOnce)
	})

	// deferred functions are called even on panic, although execution is not guaranteed (OOM errors etc.)
	defer AllUsersOffline(usersOfflineOnce)

	/* code */

	// run server
	err := server.Run()
	if err != nil {
		// error logic here
	}

	// bla bla bla
}
I think you need to look at goroutines and channels.
Here something (maybe) useful:
https://nathanleclaire.com/blog/2014/02/15/how-to-wait-for-all-goroutines-to-finish-executing-before-continuing/
How can one only attempt to acquire a mutex-like lock in go, either aborting immediately (like TryLock does in other implementations) or by observing some form of deadline (basically LockBefore)?
I can think of 2 situations right now where this would be greatly helpful and where I'm looking for some sort of solution. The first one is: a CPU-heavy service which receives latency sensitive requests (e.g. a web service). In this case you would want to do something like the RPCService example below. It is possible to implement it as a worker queue (with channels and stuff), but in that case it becomes more difficult to gauge and utilize all available CPU. It is also possible to just accept that by the time you acquire the lock your code may already be over deadline, but that is not ideal as it wastes some amount of resources and means we can't do things like a "degraded ad-hoc response".
/* Example 1: LockBefore() for latency sensitive code. */
func (s *RPCService) DoTheThing(ctx context.Context, ...) ... {
	if s.someObj[req.Parameter].mtx.LockBefore(ctx.Deadline()) {
		defer s.someObj[req.Parameter].mtx.Unlock()
		... expensive computation based on internal state ...
	} else {
		return s.cheapCachedResponse[req.Parameter]
	}
}
Another case is when you have a bunch of objects which should be touched, but which may be locked, and where touching them should complete within a certain amount of time (e.g. updating some stats). In this case you could also either use LockBefore() or some form of TryLock(), see the Stats example below.
/* Example 2: TryLock() for updating stats. */
func (s *StatsObject) updateObjStats(key, value interface{}) {
	if s.someObj[key].TryLock() {
		defer s.someObj[key].Unlock()
		... update stats ...
		... fill in s.cheapCachedResponse ...
	}
}

func (s *StatsObject) UpdateStats() {
	s.someObj.Range(s.updateObjStats)
}
For ease of use, let's assume that in the above case we're talking about the same s.someObj. Any object may be blocked by DoTheThing() operations for a long time, which means we would want to skip it in updateObjStats. Also, we would want to make sure that we return the cheap response in DoTheThing() in case we can't acquire a lock in time.
Unfortunately, sync.Mutex only and exclusively has the functions Lock() and Unlock(). There is no way to potentially acquire a lock. Is there some easy way to do this instead? Am I approaching this class of problems from an entirely wrong angle, and is there a different, more "go"ish way to solve them? Or will I have to implement my own Mutex library if I want to solve these? I am aware of issue 6123 which seems to suggest that there is no such thing and that the way I'm approaching these problems is entirely un-go-ish.
Use a channel with a buffer size of one as a mutex.
l := make(chan struct{}, 1)
Lock:
l <- struct{}{}
Unlock:
<-l
Try lock:
select {
case l <- struct{}{}:
	// lock acquired
	<-l
default:
	// lock not acquired
}
Try with timeout:
select {
case l <- struct{}{}:
	// lock acquired
	<-l
case <-time.After(time.Minute):
	// lock not acquired
}
I think you're asking several different things here:
Does this facility exist in the standard library? No, it doesn't. You can probably find implementations elsewhere; it is possible to implement using the standard library (atomics, for example).
Why doesn't this facility exist in the standard library? The issue you mentioned in the question is one discussion; there are also several threads on the go-nuts mailing list with several Go core developers contributing: link 1, link 2. And it's easy to find other discussions by googling.
How can I design my program such that I won't need this?
The answer to (3) is more nuanced and depends on your exact issue. Your question already says
"It is possible to implement it as a worker queue (with channels and stuff), but in that case it becomes more difficult to gauge and utilize all available CPU"
without providing details on why that would be more difficult than checking a mutex's lock state.
In Go you usually want channels whenever the locking schemes become non-trivial. It shouldn't be slower, and it should be much more maintainable.
How about this package: https://github.com/viney-shih/go-lock? It uses channels and a semaphore (golang.org/x/sync/semaphore) to solve your problem.
go-lock implements TryLock, TryLockWithTimeout, and TryLockWithContext functions in addition to Lock and Unlock, which gives you flexible control over the resource.
Examples:
package main
import (
"fmt"
"time"
"context"
lock "github.com/viney-shih/go-lock"
)
func main() {
	casMut := lock.NewCASMutex()

	casMut.Lock()
	defer casMut.Unlock()

	// TryLock without blocking
	fmt.Println("Return", casMut.TryLock()) // Return false

	// TryLockWithTimeout without blocking
	fmt.Println("Return", casMut.TryLockWithTimeout(50*time.Millisecond)) // Return false

	// TryLockWithContext without blocking
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	fmt.Println("Return", casMut.TryLockWithContext(ctx)) // Return false

	// Output:
	// Return false
	// Return false
	// Return false
}
PMutex from the package https://github.com/myfantasy/mfs
PMutex implements RTryLock(ctx context.Context) and TryLock(ctx context.Context):
// ctx - some context
ctx := context.Background()

mx := mfs.PMutex{}
isLocked := mx.TryLock(ctx)
if isLocked {
	// DO Something
	mx.Unlock()
} else {
	// DO Something else
}
I'm writing a package to control a Canon DSLR using their EDSDK DLL from Go.
This is a personal project for a photo booth to use at our wedding, at my partner's request, which I'll be happy to post on GitHub when complete :).
Looking at examples of using the SDK elsewhere, it isn't thread-safe and uses thread-local resources, so I'll need to make sure I'm calling it from a single thread. While not ideal, Go provides runtime.LockOSThread for exactly that, although this also gets called by the core DLL interop code itself, so I'll have to wait and find out whether that interferes.
I want the rest of the application to be able to call the SDK through a higher-level interface without worrying about the threading, so I need a way to pass function-call requests to the locked thread/goroutine, execute them there, and pass the results back to the calling function outside that goroutine.
So far, I've come up with this working example, using very broad function definitions with []interface{} slices passed back and forth via channels. This requires a lot of mangling of input/output data on every call to type-assert values back out of the []interface{}, even when we know ahead of time what to expect for each function, but it looks like it'll work.
Before I invest a lot of time doing it this way for possibly the worst way to do it - does anyone have any better options?
package edsdk

import (
	"fmt"
	"runtime"
)

type CanonSDK struct {
	FChan chan functionCall
}

type functionCall struct {
	Function  func([]interface{}) []interface{}
	Arguments []interface{}
	Return    chan []interface{}
}

func NewCanonSDK() (*CanonSDK, error) {
	c := &CanonSDK{
		FChan: make(chan functionCall),
	}
	go c.BackgroundThread(c.FChan)
	return c, nil
}

func (c *CanonSDK) BackgroundThread(fcalls <-chan functionCall) {
	runtime.LockOSThread()
	for f := range fcalls {
		f.Return <- f.Function(f.Arguments)
	}
	runtime.UnlockOSThread()
}

func (c *CanonSDK) TestCall() {
	ret := make(chan []interface{})
	f := functionCall{
		Function:  c.DoTestCall,
		Arguments: []interface{}{},
		Return:    ret,
	}
	c.FChan <- f
	results := <-ret
	close(ret)
	fmt.Printf("%#v", results)
}

func (c *CanonSDK) DoTestCall([]interface{}) []interface{} {
	return []interface{}{"Test", nil}
}
For similar embedded projects I've played with, I tend to create a single worker goroutine that listens on a channel and performs all the work against that USB device, with any results sent back out on another channel.
Talk to the device only through channels in Go, in a one-way exchange, and listen for responses on the other channel.
Since USB is serial and polled, I had to set up a dedicated channel with another goroutine that just picks items off the channel as they are pushed into it by the looping worker goroutine.