Creating call graph - go

I am looking for a possibility to generate a call graph for Go projects. Something similar to Doxygen's diagram functionality for C++ classes (with the option CALL_GRAPH=YES).
So far I found
http://saml.rilspace.org/profiling-and-creating-call-graphs-for-go-programs-with-go-tool-pprof
or
http://blog.golang.org/profiling-go-programs
This samples the call stack of your program 100 times per second while it is running and creates a graph useful for profiling. If your program spends most of its time in functions that are not relevant to you, I found this approach not very useful.
Then there is this:
https://godoc.org/golang.org/x/tools/go/callgraph/static
which from its description sounds like what I would need, but there seem to be no docs and I don't understand how to use it.
I also found
https://github.com/davecheney/graphpkg/blob/master/README.md
and
https://github.com/paetzke/go-dep-graph/blob/master/README.org
but they create only dependency graphs.

Take a look here:
http://dave.cheney.net/2014/10/22/simple-profiling-package-moved-updated
https://github.com/pkg/profile
package main

import "github.com/pkg/profile"

func main() {
    defer profile.Start(profile.CPUProfile, profile.ProfilePath(".")).Stop()
    // Rest of program
}
Build and run your program as per normal. You'll see the profiling hook
mentioned:
2015/07/12 09:02:02 profile: cpu profiling enabled, cpu.pprof
Run your program (bench it, run through it, etc) to generate the profile during
runtime. Once you've hit what you want, quit and then generate the call-graph:
go tool pprof -pdf -output cgraph.pdf $YOURPROGBINARY cpu.pprof
You can also run go tool pprof $YOURPROGBINARY cpu.pprof to get an
interactive prompt where you can call top10 or web to generate an svg. Type
help at the pprof prompt to get a list of commands.
e.g. - here's the CPU profile for a buffer pool implementation I wrote:
~/Desktop go tool pprof poolio cpu.pprof
Entering interactive mode (type "help" for commands)
(pprof) top5
24770ms of 35160ms total (70.45%)
Dropped 217 nodes (cum <= 175.80ms)
Showing top 5 nodes out of 74 (cum >= 650ms)
      flat   flat%   sum%        cum   cum%
   12520ms  35.61%  35.61%   12520ms  35.61%  runtime.mach_semaphore_wait
    9300ms  26.45%  62.06%    9360ms  26.62%  syscall.Syscall
    1380ms   3.92%  65.98%    2120ms   6.03%  encoding/json.(*encodeState).string
    1030ms   2.93%  68.91%    1030ms   2.93%  runtime.kevent
     540ms   1.54%  70.45%     650ms   1.85%  runtime.mallocgc
And here's a quick way to generate a PNG from the prompt:
(pprof) png > graph.png
Generating report in graph.png
Which outputs the rendered call graph as a PNG image.

You were close with …/x/tools/go/callgraph/static. I'm pretty sure go install golang.org/x/tools/cmd/callgraph is what you want. Once installed, run it without arguments to see its full help/usage.
(In general, the things under …/x/tools/ are somewhat reusable packages with command-line front-ends living under …/x/tools/cmd; you can install them all with go install golang.org/x/tools/cmd/..., where the literal /... matches all sub-packages.)
E.g. running just callgraph produces usage output that starts with:
callgraph: display the call graph of a Go program.
Usage:
  callgraph [-algo=static|cha|rta|pta] [-test] [-format=...] <args>...
Flags:
  -algo    Specifies the call-graph construction algorithm, one of:
             static   static calls only (unsound)
             cha      Class Hierarchy Analysis
             rta      Rapid Type Analysis
             pta      inclusion-based Points-To Analysis
           The algorithms are ordered by increasing precision in their
           treatment of dynamic calls (and thus also computational cost).
           RTA and PTA require a whole program (main or test), and
           include only functions reachable from main.
  -test    Include the package's tests in the analysis.
  -format  Specifies the format in which each call graph edge is displayed.
           One of:
             digraph    output suitable for input to
                        golang.org/x/tools/cmd/digraph.
             graphviz   output in AT&T GraphViz (.dot) format.
It can produce arbitrarily formatted output (using Go's template syntax), or graphviz or digraph output. The latter is a tool you can install with go install golang.org/x/tools/cmd/digraph (once again, its full help/usage is shown by running it without arguments); it can answer queries about arbitrary directed graphs, including call graphs, obviously.
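For example, assuming Graphviz's dot tool is installed and you run this from the root of a Go module (the ./... package pattern and file names are just placeholders), rendering a static call graph to SVG could look roughly like this:
callgraph -algo=static -format=graphviz ./... > callgraph.dot
dot -Tsvg -o callgraph.svg callgraph.dot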

Another approach, which does use golang.org/x/tools/go/callgraph, is the ofabry/go-callvis project:
(Go 1.13+)
The purpose of this tool is to provide developers with a visual overview of a Go program using data from call graph and its relations with packages and types.
- support for Go modules!
- focus specific package in the program
- click on package to quickly switch the focus using interactive viewer
- group functions by package and/or methods by type
- filter packages to specific import path prefixes
- ignore funcs from standard library
- omit various types of function calls
Example (based on this Go code project):
How it works
It runs pointer analysis to construct the call graph of the program and uses the data to generate output in dot format, which can be rendered with Graphviz tools.
To use the interactive view provided by a web server that serves SVG images of focused packages, you can simply run:
go-callvis <target package>
The HTTP server listens on http://localhost:7878/ by default; use the option -http="ADDR:PORT" to change the HTTP server address.
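As a hedged illustration of the features listed above (flag names taken from go-callvis's help output; verify them with go-callvis -h, and treat mypkg and ./cmd/myapp as placeholders), a run that omits the standard library, groups by package and type, and focuses one package could look like:
go-callvis -nostd -group pkg,type -focus mypkg ./cmd/myapp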

I used Go's callgraph recently, and I built a web tool with Python + callgraph called CallingViewer, here: https://github.com/fiefdx/CallingViewer . It may be rough, but it works; see the screenshot below:
screenshot of CallingViewer

Related

Profiling a Go program spanning several runs

I want to profile a Go program's performance between different runs with different OS-level settings. I'm aware that I can get profiles for single runs via $ go test -cpuprofile cpu.prof -memprofile mem.prof -bench .. However, I don't know how to aggregate the information in such a way that I can compare the results either visually or programmatically.
Below is a sketch in the Xonsh scripting language, which is a hybrid of Python and Bash; however, I'm happy to accept suggestions written in pure Bash as well.
for i in range(n):
    change_system_settings()
    # Run 'go test' and save the results in cpu0.prof, cpu1.prof, cpu2.prof etc.
    #(f'go test -cpuprofile cpu{i}.prof -memprofile mem{i}.prof -bench .'.split())
The script changes the system settings and runs the program through the profiler n times. After that process I'm left with possibly dozens of individual .prof files. I would like to have a holistic view of them, compare the memory and CPU usage between runs, and even run numeric tests to see which run was optimal.
If you use Go's pprof to profile your program, the pprof library has a Merge function that merges multiple pprof output files into one.
The library is github.com/google/pprof, so you just import its profile package in a Go program:
import "github.com/google/pprof/profile"
Then you'll need to load all of your pprof files, opening each one with os.Open() and parsing it with profile.Parse(). If we assume you have done that and collected the parsed profiles in a slice called allProfiles, you merge them with the following call:
result, err := profile.Merge(allProfiles)
Then you output the merged data into a new file: create it with os.Create() (or os.OpenFile(...)), write the merged profile to it, and close it.
I haven't tested this just now, honestly, but I remember this is how we did it a long time ago. So technically, you could invoke such a Go program after the for loop in your test script is done.
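Here is a minimal sketch of that flow, assuming the cpu0.prof/cpu1.prof/... naming from the question (the file list and the merged.prof output name are placeholders):

package main

import (
    "log"
    "os"

    "github.com/google/pprof/profile"
)

func main() {
    // Profiles produced by the individual runs; adjust to however many you have.
    names := []string{"cpu0.prof", "cpu1.prof", "cpu2.prof"}

    var profiles []*profile.Profile
    for _, name := range names {
        f, err := os.Open(name)
        if err != nil {
            log.Fatal(err)
        }
        p, err := profile.Parse(f) // parse one pprof file into a *profile.Profile
        f.Close()
        if err != nil {
            log.Fatal(err)
        }
        profiles = append(profiles, p)
    }

    // Merge all runs into a single profile.
    merged, err := profile.Merge(profiles)
    if err != nil {
        log.Fatal(err)
    }

    out, err := os.Create("merged.prof")
    if err != nil {
        log.Fatal(err)
    }
    defer out.Close()

    // Write emits the merged profile in (gzip-compressed) pprof format,
    // so the result can be opened with go tool pprof as usual.
    if err := merged.Write(out); err != nil {
        log.Fatal(err)
    }
}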
Documentation: https://github.com/google/pprof/blob/master/doc/README.md

running external program in TCL

After developing an elaborate TCL code to do smoothing based on Gabriel Taubin's smoothing without shape shrinkage, the code runs extremely slow. This is likely due to the size of unstructured grid I am smoothing. I have to use TCL because the grid generator I am using is Pointwise and Pointwise's "macro language" is TCL based. I'm still a bit new to this, but is there a way to run an external code from TCL where TCL sends the data to the software, the software runs the smoothing operation, and output is sent back to TCL to update the internal data inside the Pointwise grid generation tool? I will be writing the smoothing tool in another language which is significantly faster.
There are a number of options to deal with code that "runs extremely slow". I would start by determining how fast it must run. Are we talking milliseconds, seconds, minutes, hours or days? Next, it is necessary to determine which part is slow. The time command is useful here.
But assuming you have decided that more performance is necessary and you have some metrics for your current program so you will know if you are improving, here are some things to try:
Try to improve the existing code. If you are using the expr command, make sure your expressions are given to the command as a single argument enclosed in braces. Beginners sometimes forget this and the improvement can be substantial.
Use the critcl package to code parts of the program in "C". Critcl allows you to put "C" code directly into your Tcl program and have that code pulled out, compiled and loaded into your program.
Write a traditional "C" based Tcl extension. Tcl is very extensible and has a clean API for building extensions. There is sample code for extensions and source to many extensions is readily available.
Write a program to do the time-consuming part of the job, execute it as a separate process, and get the output back into your Tcl script. This is where the exec command comes in useful. Presumably you will have to write data out to somewhere the external program can read it, and read the output of that program back into your Tcl script. If you want to get fancy, you can do two-way communication across a localhost TCP port. The setup in Tcl is quite simple. The "C" code in a program to do it is a bit more tedious, but many examples exist out on the Internet.
Which option to choose depends very much on how much improvement is required and the amount of code that must be improved. You haven't given us much idea what those things are in your case, so all I can offer is rather vague general solutions.
For a loadable module, you can write a Tcl extension. An example is here:
File Last Modified Time with Milliseconds Precision
Alternatively, just write your program to take input from a file. Have Tcl write the input data to the file, run the program, then collect the output from the external program.
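As an illustration of that last suggestion, here is a minimal sketch of such an external tool, written in Go purely as an example of a faster compiled language (the one-point-per-line file format, the argument order, and the omitted smoothing step are all assumptions); Tcl would write the input file, call the tool via exec, then read the output file back:

package main

import (
    "bufio"
    "fmt"
    "log"
    "os"
)

func main() {
    if len(os.Args) != 3 {
        log.Fatalf("usage: %s input.txt output.txt", os.Args[0])
    }

    in, err := os.Open(os.Args[1])
    if err != nil {
        log.Fatal(err)
    }
    defer in.Close()

    out, err := os.Create(os.Args[2])
    if err != nil {
        log.Fatal(err)
    }
    defer out.Close()

    w := bufio.NewWriter(out)
    defer w.Flush()

    // Read one "x y z" point per line, transform it, write it back out.
    sc := bufio.NewScanner(in)
    for sc.Scan() {
        var x, y, z float64
        if _, err := fmt.Sscan(sc.Text(), &x, &y, &z); err != nil {
            continue // skip malformed lines in this sketch
        }
        // ... the actual smoothing of (x, y, z) would happen here ...
        fmt.Fprintf(w, "%g %g %g\n", x, y, z)
    }
    if err := sc.Err(); err != nil {
        log.Fatal(err)
    }
}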

Golang pprof full call graph

I'm kinda new to pprof. I've started CPU profiling, and after a bit of time checked the top25. This is what I got:
Showing top 25 nodes out of 174
       flat   flat%   sum%        cum   cum%
   1.01mins  21.92%  21.92%   1.10mins  23.83%  time.Time.AppendFormat
   0.26mins   5.56%  27.48%   0.26mins   5.56%  type..eq.[65]runtime.sigTabT
   0.23mins   5.07%  32.55%   0.23mins   5.07%  type..hash.[3]runtime.symbol_key
   0.15mins   3.14%  35.69%   0.15mins   3.14%  type..hash.[9]string
...
I thought that was all cool; I just needed to get rid of that time function. Then I realized that I don't even use anything from the time package, so it must come from either a third-party lib or one of Go's internal functions.
So I generated the graph with the -web flag so I could see which function calls it, but it doesn't really show up directly. Is there any way to track down where it's coming from?
I've been using the following approach to see everything.
go tool pprof -http :9999 -edgefraction 0 -nodefraction 0 -nodecount 100000 cpu.prof
This can give you a huge graph which can be quite difficult to follow. To help with that you can click on the offending node in the web view and select 'Focus' from the 'Refine' menu in the top left corner. This provides a view of that node and all its callers and callees.
The key options to use in order to see everything are:
--nodecount=<n> Show at most so many nodes [default=80]
--nodefraction=<f> Hide nodes below <f>*total [default=.005]
--edgefraction=<f> Hide edges below <f>*total [default=.001]
You can also use -focus on the command line to speed up rendering of large graphs.
--focus=<regexp> Focus on nodes matching <regexp>
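For example, to zoom in on the time function from the question directly from the command line (the regular expression here is just illustrative), something like this should work:
go tool pprof -http :9999 -focus 'AppendFormat' cpu.prof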

How to detect overlapping communities using snap BIGCLAM method?

I want to detect overlapping communities in a network. I have a file trust.txt whose format is like this: [user-id (trustor), user-id (trustee)]. I want to run SNAP's BIGCLAM algorithm for community detection. How can I run the BIGCLAM method to get communities as output? I saw this link https://github.com/snap-stanford/snap/tree/master/examples/bigclam, but how can I compile and run this code to get the output?
This answer might be too late for you. Nevertheless, it might be helpful to others.
Once you download the whole snap-master, you need to perform make all, as stated there. This installs advanced features and the examples.
Then you can switch to the directory bigclam within examples and run make there. After that, you can run ./bigclam (on Linux), as stated in the Readme file.
Basically, you put your prepared data there (an edge list with node indices; if your nodes have names, that file is needed as well). You run it as per the Readme example.
./bigclam -o:'your_out_prefix' -i:'your_nodeids.edgelist' -c:1000

how to make golang execute a string

I am not asking to make Go do some sort of "eval" in the current context; I just need it to take an input string (say, received from the network) and execute it as a separate Go program.
In theory, when you run go run testme.go, it will read the content of testme.go into a string and parse, compile and execute it. I wonder if it is possible to call a Go function to execute a string directly. Note that I have a requirement not to write the string into a file.
UPDATE1
I really want to find out whether there is a function (in some Go package) that serves as an entry point of Go; in other words, a function I can call with a string as a parameter that will behave like go run testme.go, where testme.go has the content of that string.
AFAIK it cannot be done: the Go compiler has to write intermediate files, so even if you use go run and not go build, files are created for the sake of running the code; they are just cleaned up afterwards. So you can't run a Go program without touching the disk, even if you somehow manage to make the compiler take the source from something other than a file.
For example, running strace on calling go run on a simple hello world program, shows among other things, the following lines:
mkdir("/tmp/go-build167589894", 0700)
// ....
mkdir("/tmp/go-build167589894/command-line-arguments/_obj/exe/", 0777)
// ... and at the end of things
unlink("/tmp/go-build167589894/command-line-arguments/_obj/exe/foo")
// ^^^ my program was called foo.go
// ....
// and eventually:
rmdir("/tmp/go-build167589894")
So you can see that go run does a lot of disk writing behind the scenes; it just cleans up afterwards.
I suppose you can mount some tmpfs and build in it if you wish, but otherwise I don't believe it's possible.
I know that this question is (5 years) old, but I wanted to say that it is actually possible now, for anyone looking for an up-to-date answer.
The Go compiler is itself written in Go, so it can theoretically be embedded in a Go program. This would be quite complicated, though.
As a better alternative, there are projects like yaegi which are effectively a Go interpreter which can be embedded into Go programs.
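For instance, here is a minimal sketch using yaegi's interpreter API (the source string and the Hello function are made up for illustration; see the yaegi documentation for details):

package main

import (
    "fmt"

    "github.com/traefik/yaegi/interp"
    "github.com/traefik/yaegi/stdlib"
)

func main() {
    // Pretend this string was received over the network.
    src := `func Hello() string { return "hello from interpreted code" }`

    i := interp.New(interp.Options{})
    i.Use(stdlib.Symbols) // optional: expose the standard library to the interpreted code

    // Evaluate the received source, then call the function it defined.
    if _, err := i.Eval(src); err != nil {
        panic(err)
    }
    v, err := i.Eval("Hello()")
    if err != nil {
        panic(err)
    }
    fmt.Println(v) // prints the value returned by the interpreted function
}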
