Golang pprof full call graph - go

I'm kinda new to pprof. I've started CPU profiling, and after a bit of time checked the top 25. This is what I got:
Showing top 25 nodes out of 174
      flat  flat%   sum%        cum   cum%
  1.01mins 21.92% 21.92%   1.10mins 23.83%  time.Time.AppendFormat
  0.26mins  5.56% 27.48%   0.26mins  5.56%  type..eq.[65]runtime.sigTabT
  0.23mins  5.07% 32.55%   0.23mins  5.07%  type..hash.[3]runtime.symbol_key
  0.15mins  3.14% 35.69%   0.15mins  3.14%  type..hash.[9]string
...
I thought that's all cool, I just need to get rid of that time function. Then I realized I don't even use anything from the time package, so it must come either from a third-party lib or from one of Go's internal functions.
So I generated the graph with the -web flag so I could see which function calls it, but it doesn't really show the caller directly. Is there any way to track down where it's coming from?

I've been using the following approach to see everything.
go tool pprof -http :9999 -edgefraction 0 -nodefraction 0 -nodecount 100000 cpu.prof
This can give you a huge graph which can be quite difficult to follow. To help with that, click on the offending node in the web view and select 'Focus' from the 'Refine' menu in the top-left corner. This gives a view of that node together with all its callers and callees.
The key options to use in order to see everything are:
--nodecount=<n> Show at most so many nodes [default=80]
--nodefraction=<f> Hide nodes below <f>*total [default=.005]
--edgefraction=<f> Hide edges below <f>*total [default=.001]
You can also use -focus on the command line to speed up rendering of large graphs.
--focus=<regexp> Focus on nodes matching <regexp>
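For example, assuming the profile file is cpu.prof as above, you can render a view focused on the offending function (the regexp is illustrative):
go tool pprof -http :9999 -focus 'AppendFormat' cpu.prof
The interactive prompt's peek command also prints the callers and callees of functions matching a regexp:
go tool pprof cpu.prof
(pprof) peek AppendFormat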

Related

R307 Fingerprint Sensor working with more than 1000 fingerprints

I want to integrate a fingerprint sensor into my project. For the moment I have shortlisted the R307, which has a capacity of 1000 fingerprints. But since the project requires more than 1000 prints, I'm going to store the prints on the host.
The procedure I understand from reading the datasheet for achieving the project requirements is:
1. Register the fingerprint with "GenImg".
2. Download the template with "UpChar".
3. Whenever a fingerprint comes in, follow steps 1 and 2, then run some sort of matching algorithm that matches the freshly downloaded template file with the template files stored in the database.
So below are the points on which I want your thoughts:
Is the procedure I have written above correct and optimized?
Is the matching algorithm straightforward (just a comparison) or is it trickier? How can I implement it? Please suggest a library if one already exists.
The sensor stores the image at 256 × 288 pixels, and transferring that image to the host at the maximum data rate takes ~5 seconds (256 × 288 × 8 / 115200), which seems very long.
Thanks
Abhishek
PS: By "HOST" I just mean the device I'm going to connect the sensor to; it could be an Arduino/Pi or any other compute device, to be selected depending on how much computing the task requires.
You most probably figured it out yourself, but for anyone stumbling on this in the future: you're correct for the most part.
You will take the finger image (GenImg).
You will then generate a character file (Img2Tz) in BufferID 1.
You'll repeat the above two steps, but this time store the character file in BufferID 2.
You're now supposed to generate a template file by combining those two character files (RegModel). The device combines them for you and stores the resulting template in both character buffers.
As a last step, you need to store this template in the sensor's library (Store).
For searching a finger: you'll take the finger image once, generate a character file in BufferID 1, and search the library (Search). This performs a linear search and returns the finger id along with a confidence score (see the sketch below).
There's also another method (GR_Identify) that does all of the above automatically.
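Here's a minimal sketch of that flow in Go. The Sensor interface and its method names are hypothetical stand-ins for the R307 command packets (GenImg, Img2Tz, RegModel, Store, Search); a real implementation would wrap the UART protocol from the datasheet:
package r307

// Sensor is a hypothetical wrapper around the R307 command set;
// each method would send the corresponding command packet over UART.
type Sensor interface {
	GenImg() error                    // capture a finger image
	Img2Tz(bufferID int) error        // image -> character file in buffer 1 or 2
	RegModel() error                  // merge buffers 1+2 into a template
	Store(bufferID, pageID int) error // save the template to flash slot pageID
	Search(bufferID, start, count int) (id, score int, err error) // linear library search
}

// Enroll captures the finger twice, merges the two character files
// into one template, and stores it at library slot pageID.
func Enroll(s Sensor, pageID int) error {
	for buf := 1; buf <= 2; buf++ {
		if err := s.GenImg(); err != nil {
			return err
		}
		if err := s.Img2Tz(buf); err != nil {
			return err
		}
	}
	if err := s.RegModel(); err != nil {
		return err
	}
	return s.Store(1, pageID) // the template sits in both buffers; either works
}

// Identify captures once and linearly searches the whole library,
// returning the matching slot id and a confidence score.
func Identify(s Sensor, librarySize int) (id, score int, err error) {
	if err := s.GenImg(); err != nil {
		return 0, 0, err
	}
	if err := s.Img2Tz(1); err != nil {
		return 0, 0, err
	}
	return s.Search(1, 0, librarySize)
}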
The question about optimization isn't really applicable here: you're using a third-party device, so you have to follow its documented command sequence whether it's optimized or not.
The sensor stores the image at 256 × 288 pixels, and transferring that image to the host at the maximum data rate takes ~5 seconds, which seems very long.
I don't quite get what you mean by this, but the template file (the one you intend to upload to your host) is 512 bytes, so I don't think it should take much time.
If you want an overview of how this system is implemented, Adafruit's library is a good reference.

How to detect overlapping communities using snap BIGCLAM method?

I want to detect overlapping communities in a network. I have a file trust.txt whose format is [user-id (trustor), user-id (trustee)]. I want to run SNAP's BIGCLAM algorithm for community detection. How can I run the BIGCLAM method to get communities as output? I saw this link https://github.com/snap-stanford/snap/tree/master/examples/bigclam but how do I compile and run this code to get the output?
This answer might be too late for you. Nevertheless, it might be helpful to others.
Once you download the whole snap-master, you need to run make all, as stated there. This builds the advanced features and the examples.
Then you can switch to the bigclam directory within examples and run make there. After that, you can run ./bigclam (on Linux), as stated in the Readme file.
Basically, you point it at your prepared data (an edge list with node indices; if your nodes have names, a mapping file is needed as well) and run it as in the Readme example.
./bigclam -o:'your_out_prefix' -i:'your_nodeids.edgelist' -c:1000
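End to end, a session might look like this (file names and the -c community count are illustrative; the input should be a tab-separated edge list of integer node ids, so trust.txt may need converting first):
cd snap-master
make all
cd examples/bigclam
make
./bigclam -o:trust_ -i:trust.edgelist -c:100
The detected communities should end up in a text file named with that output prefix, one community (a list of node ids) per line.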

Creating call graph

I am looking for a possibility to generate a call graph for Go projects. Something similar to Doxygen's diagram functionality for C++ classes (with the option CALL_GRAPH=YES).
So far I found
http://saml.rilspace.org/profiling-and-creating-call-graphs-for-go-programs-with-go-tool-pprof
or
http://blog.golang.org/profiling-go-programs
This samples the call stack of your program 100 times per second while the program is running and creates a graph useful for profiling. If your program spends most of its time in functions not relevant to you, I found this solution not very useful.
Then there is this:
https://godoc.org/golang.org/x/tools/go/callgraph/static
which from its description sounds like what I would need, but there seem to be no docs and I don't understand how to use it.
I also found
https://github.com/davecheney/graphpkg/blob/master/README.md
and
https://github.com/paetzke/go-dep-graph/blob/master/README.org
but they create only dependency graphs.
Take a look here:
http://dave.cheney.net/2014/10/22/simple-profiling-package-moved-updated
https://github.com/pkg/profile
package main

import "github.com/pkg/profile"

func main() {
	defer profile.Start(profile.CPUProfile, profile.ProfilePath(".")).Stop()
	// Rest of program
}
Build and run your program as normal. You'll see the profiling hook mentioned:
2015/07/12 09:02:02 profile: cpu profiling enabled, cpu.pprof
Run your program (bench it, run through it, etc) to generate the profile during
runtime. Once you've hit what you want, quit and then generate the call-graph:
go tool pprof -pdf -output cgraph.pdf $YOURPROGBINARY cpu.pprof
You can also run go tool pprof $YOURPROGBINARY cpu.pprof to get an
interactive prompt where you can call top10 or web to generate an svg. Type
help at the pprof prompt to get a list of commands.
e.g. - here's the CPU profile for a buffer pool implementation I wrote:
~/Desktop $ go tool pprof poolio cpu.pprof
Entering interactive mode (type "help" for commands)
(pprof) top5
24770ms of 35160ms total (70.45%)
Dropped 217 nodes (cum <= 175.80ms)
Showing top 5 nodes out of 74 (cum >= 650ms)
      flat  flat%   sum%     cum   cum%
   12520ms 35.61% 35.61% 12520ms 35.61%  runtime.mach_semaphore_wait
    9300ms 26.45% 62.06%  9360ms 26.62%  syscall.Syscall
    1380ms  3.92% 65.98%  2120ms  6.03%  encoding/json.(*encodeState).string
    1030ms  2.93% 68.91%  1030ms  2.93%  runtime.kevent
     540ms  1.54% 70.45%   650ms  1.85%  runtime.mallocgc
And here's a quick way to generate a PNG from the prompt:
(pprof) png > graph.png
Generating report in graph.png
You were close with …/x/tools/go/callgraph/static. I'm pretty sure go install golang.org/x/tools/cmd/callgraph is what you want (with module-aware Go, append @latest). Once installed, run it without arguments to see its full help/usage.
(In general, the things under …/x/tools/ are somewhat reusable packages with command-line front-ends living under …/x/tools/cmd. You can install them all with go install golang.org/x/tools/cmd/...; the literal /... matches all sub-packages.)
E.g. running just callgraph produces usage output that starts with:
callgraph: display the call graph of a Go program.

Usage:

  callgraph [-algo=static|cha|rta|pta] [-test] [-format=...] <args>...

Flags:

  -algo      Specifies the call-graph construction algorithm, one of:

             static      static calls only (unsound)
             cha         Class Hierarchy Analysis
             rta         Rapid Type Analysis
             pta         inclusion-based Points-To Analysis

             The algorithms are ordered by increasing precision in their
             treatment of dynamic calls (and thus also computational cost).
             RTA and PTA require a whole program (main or test), and
             include only functions reachable from main.

  -test      Include the package's tests in the analysis.

  -format    Specifies the format in which each call graph edge is displayed.
             One of:

             digraph     output suitable for input to
                         golang.org/x/tools/cmd/digraph.
             graphviz    output in AT&T GraphViz (.dot) format.
It can produce arbitrarily formatted output (using Go's template syntax), or graphviz or digraph output. The latter is a tool you can install with go install golang.org/x/tools/cmd/digraph (and once again, full help/usage is shown by running it without arguments) and can answer queries about arbitrary directed graphs (including call graphs, obviously).
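For example, a plausible pipeline that renders a package's static call graph to SVG (assuming Graphviz's dot is installed; the package path is illustrative):
callgraph -algo=static -format=graphviz ./cmd/myprog | dot -Tsvg -o callgraph.svg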
Another approach, which does use golang.org/x/tools/go/callgraph, is the ofabry/go-callvis project:
(Go 1.13+)
The purpose of this tool is to provide developers with a visual overview of a Go program using data from call graph and its relations with packages and types.
support for Go modules!
focus specific package in the program
click on package to quickly switch the focus using interactive viewer
group functions by package and/or methods by type
filter packages to specific import path prefixes
ignore funcs from standard library
omit various types of function calls
Example renderings are shown in the project's README.
How it works
It runs pointer analysis to construct the call graph of the program and uses the data to generate output in dot format, which can be rendered with Graphviz tools.
To use the interactive view provided by a web server that serves SVG images of focused packages, you can simply run:
go-callvis <target package>
The HTTP server listens on http://localhost:7878/ by default; use the option -http="ADDR:PORT" to change the address.
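For example, a typical invocation might look like this (the package path is illustrative; -nostd and -group are documented go-callvis flags):
go-callvis -nostd -group pkg,type github.com/your/project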
I used Go's callgraph recently, and I built a web tool with Python + callgraph called CallingViewer: https://github.com/fiefdx/CallingViewer . It may be rough, but it works; there's a screenshot in the README.

Munin Graph - How To Set Max Upper Limit For mysql slowqueries & munin stats?

Wow, this is my very first post on stackoverflow! Been using results for years, but this is the first time I'm 100% stumped and decided to join!
I use Munin to monitor and graph stuff like CPU, Memory, Loads, etc. on my VPS.
Sometimes I get a huge statistical outlier data point that throws my graphs out of whack. I want to set an upper limit for these graphs simply to keep such outliers from squashing the rest of the data view.
After hours of digging and experimenting I was able to change the upper limit on Loads by doing the following:
cd /etc/munin/plugins
pico load
I changed: echo 'graph_args --base 1000 -l 0'
to: echo 'graph_args --base 1000 -l 0 -u 5 --rigid'
It worked perfectly!
Unfortunately, I've tried everything to give munin stats processing time and mysql slowqueries an upper limit, and I can't figure it out!
Here is the line in mysql_slowqueries
echo 'graph_args --base 1000 -l 0'
... and for munin_stats
"graph_args --base 1000 -l 0\n",
I've tried every combination of -u and --upper-limit for both of those, and nothing I do affects the graph's displayed maximum.
Any ideas on what I need to change those lines to so I can get a fixed upper limit max?
Thanks!
I highly encourage playing with the scripts, even though you run the risk of them being overwritten by an update. Just back them up and restore them if needed. If you have built or improved things, don't forget to share them with us on GitHub: https://github.com/munin-monitoring/munin
When you set --upper-limit to 100 and your value is 110, your graph will run to 110. If you add --rigid, the graph scale stays at 100 and the line is clipped, which is what you wanted in this case.
Your mysql_slowqueries graph line should read something like this (it caps the graph at 100):
echo 'graph_args --base 1000 -l 0 --upper-limit 100 --rigid'
Changing the scripts is highly discouraged, since with the next update the package manager might replace them and undo your changes.
Munin gives you different ways to define limits: on the node itself as well as on the server.
You can find (sort of) an answer in the FAQ.
For me it worked really nicely to just create a file named /etc/munin/plugin-conf.d/load.conf with the following content:
[load]
env.load_warning 5
env.load_critical 10
Restart munin-node to apply the changes. On the next update of the graph you can see that the "warning" and "critical" levels have been set by clicking on the load graph in the overview (the table below the graphs).
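On a typical systemd-based distribution, that restart is something like:
sudo systemctl restart munin-node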

JMeter - saving results + configuring "graph results" time span

I am using JMeter and have 2 questions (I have read the FAQ + Wiki etc):
I use the Graph Results listener. It seems to have a fixed span, e.g. 2 hours (just guessing - this is not indicated anywhere AFAIK), after which it wraps around and starts drawing on same canvas from the left again. Hence after a long weekend run it only shows the results of last 2 hours. Can I configure that span or other properties (beyond the check boxes I see on the Graph Results listener itself)?
Can I save the results of a run and later open them? I know I can save the test plan or parts of it. I am unclear if I can save separately just the test results data, and later open them and perform comparisons etc. And furthermore can I open them with different listeners even if they weren't part of original test (i.e. I think of the test as accumulating data, and later on I want to view and interpret the data using different "viewers").
Thanks,
-- Shaul
Don't know about 1. Regarding 2: listeners typically have a configuration field, "Write All Data to a File", which lets you specify the file name. You can use the Simple Data Writer to store results efficiently for later analysis.
You can load results from a previous test into a visualizer by choosing "Write All Data to a File" and browsing for the file you wish to load. Somewhat counterintuitively, selecting a file for writing also loads that file into the visualizer and displays the results. Just make sure you don't run the test again while that file is selected, otherwise you will lose your saved test data. :-)
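A related approach (assuming a standard JMeter install) is to run the plan in non-GUI mode and log every sample to a .jtl file, which you can later load into any listener through that same file field:
jmeter -n -t testplan.jmx -l results.jtl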
Well, I later found a JMeter group that was discussing the issue raised in my first question, and B.Ramann gave me an excellent suggestion to use instead a better graph found here.
-- Shaul
