Go not using CPU fully - performance

I've been playing around with a simple raytracer in Go that has been working pretty neatly so far. I'm using multiple goroutines to render different parts of the image, which then place their results into a shared film.
Contrary to my expectations, my Go code is still about 3 times slower than equivalent Java code. Is that to be expected? Furthermore, when inspecting the CPU usage in htop, I discovered that every core is only used to about 85%. Is that an issue with htop, or is there a problem with my code? Here is the CPU profile of my application.
I did set GOMAXPROCS with runtime.GOMAXPROCS(runtime.NumCPU()). The full code is on GitHub.
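For reference, the worker setup looks roughly like the simplified sketch below (the names Film and trace and the fixed image size are placeholders for this sketch, not the actual code from the repository):

// Simplified sketch of the setup described above; Film, trace and the
// fixed image size are placeholders, not the actual code from the repository.
package main

import (
	"runtime"
	"sync"
)

const width, height = 800, 600

// Film collects pixel results from all workers; the mutex makes the
// shared writes safe.
type Film struct {
	mu     sync.Mutex
	pixels [width * height]float64
}

func (f *Film) Set(x, y int, v float64) {
	f.mu.Lock()
	f.pixels[y*width+x] = v
	f.mu.Unlock()
}

// trace is a dummy stand-in for the per-pixel ray computation.
func trace(x, y int) float64 { return float64(x ^ y) }

func render(film *Film) {
	runtime.GOMAXPROCS(runtime.NumCPU())

	rows := make(chan int)
	var wg sync.WaitGroup
	for w := 0; w < runtime.NumCPU(); w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for y := range rows {
				for x := 0; x < width; x++ {
					film.Set(x, y, trace(x, y))
				}
			}
		}()
	}
	for y := 0; y < height; y++ {
		rows <- y
	}
	close(rows)
	wg.Wait()
}

func main() {
	var film Film
	render(&film)
}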

I would guess that the garbage collector is the problem. Maybe you are making a lot of unnecessary allocations. By using runtime.ReadMemStats you can find out how much time the garbage collector has been running.
If this is the case, then you must find a way to reduce memory allocations, for example by using pools of objects; look at sync.Pool. There are also a few useful links you can find via Google that explain how to reduce memory allocation. Look at this one, for example.
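As a rough sketch (not tied to your code), checking the collector statistics could look something like this:

// Rough sketch of checking how much the garbage collector has been running.
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)

	// GCCPUFraction is the fraction of the program's available CPU time
	// spent in the garbage collector since the program started.
	fmt.Printf("GC CPU fraction: %.2f%%\n", ms.GCCPUFraction*100)
	fmt.Printf("GC cycles:       %d\n", ms.NumGC)
	fmt.Printf("Total GC pause:  %v\n", time.Duration(ms.PauseTotalNs))
	fmt.Printf("Heap objects:    %d\n", ms.HeapObjects)
	fmt.Printf("Total allocated: %d bytes\n", ms.TotalAlloc)
}

In a real program you would sample these numbers periodically (for example once per rendered frame) rather than just once at startup.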

Related

Is Xcode's debug navigator useless?

I am building an app in Xcode and am now deep into the memory management portion of the project. When I use Allocations and Leaks I seem to get entirely different results from what I see in Xcode's debug panel: in particular, the debug panel seems to show much higher memory usage than what I see in Allocations, and it also seems to highlight leaks that, as far as I can tell, (1) do not exist and (2) are confirmed not to exist by the Leaks tool. Is this thing useless, or even worse, misleading?
Here was a new one: today it told me I was using >1 GB of memory, but its little memory meter read significantly <1 GB (and was still wrong if the Allocations data is accurate). Picture below.
UPDATE: I ran VM Tracker in a 38-minute session, and it does appear that virtual memory accounts for the difference between Allocations/Leaks and the memory gauge. Picture below. I'm not entirely sure how to think about this yet. Our game uses a very large number of textures that are swapped. I imagine this is common in most games of our scale (11 boards, 330 levels; each board and map screen has unique artwork).
You are probably using the Memory Gauge while running in the Simulator using a Debug build configuration. Both of those will give you misleading memory results. The only reliable way to know how memory is being managed is to run on a device using a Release build. Instruments uses the Release build configuration, so it's already going to be better than just running and using the Memory Gauge.
Moreover, it is a known flaw that the Xcode built-in memory tools, such as the Memory Debugger, can generate false positives for leaks.
However, Instruments has its flaws as well. My experience is that, for example, it fails to catch leaks generated during app startup. Another problem is that people don't always understand how to read its output. For example, you say:
the debug panel seems to show much higher memory usage than what I see in Allocations
Yes, but Allocations is not the whole story. You are probably failing to look at the VM allocations. Those are shown separately and often constitute the reason for high memory use (because they include the backing stores for images and the view rendering tree). The Memory Gauge does include virtual memory, so this alone might account for the "difference" you think you're seeing.
So, the answer to your question is: No, the Memory Gauge is not useless. It gives a pretty good idea of when you might need to be alert to a memory issue. But you are then expected to switch to Instruments for a proper analysis.

How could I make a Go program use more memory? Is that recommended?

I'm looking for an option similar to -Xmx in Java, that is, a way to assign the maximum runtime memory my Go application can utilise. I was checking the runtime package, but I'm not entirely sure if that is the way to go.
I tried setting something like this with func SetMaxStack() (likely very stupid):
debug.SetMaxStack(5000000000) // bytes
model.ExcelCreator()
The reason I am looking to do this is that there is currently an ample amount of RAM available, but the application won't consume more than 4-6% of it. I might be wrong here, but it could be forcing GC to happen much sooner than needed, leading to a performance issue.
What I'm doing
Getting a large dataset from an RDBMS system and processing it to write out to Excel.
Another reason I am looking for such an option is to limit the maximum RAM usage on the server where it will ultimately be deployed.
Any hints on this would be greatly appreciated.
The current stable Go (1.10) has only a single knob which may be used to trade memory for lower CPU usage by the garbage collection the Go runtime performs.
This knob is called GOGC, and its description reads
The GOGC variable sets the initial garbage collection target percentage. A collection is triggered when the ratio of freshly allocated data to live data remaining after the previous collection reaches this percentage. The default is GOGC=100. Setting GOGC=off disables the garbage collector entirely. The runtime/debug package's SetGCPercent function allows changing this percentage at run time. See https://golang.org/pkg/runtime/debug/#SetGCPercent.
So basically setting it to 200 would supposedly double the amount of memory the Go runtime of your running process may use.
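For illustration, the same knob can also be turned from inside the program via debug.SetGCPercent; a minimal sketch, not specific to your application:

// Minimal illustration of raising the GC target percentage at run time;
// equivalent to starting the process with GOGC=200.
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// Let the heap grow to roughly twice the live data before the next
	// collection is triggered.
	old := debug.SetGCPercent(200)
	fmt.Printf("GC target percentage raised from %d to 200\n", old)

	// ... run the memory-hungry workload (e.g. the Excel export) here ...
}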
Having said that, I'd note that the Go runtime actually tries to adjust the behaviour of its garbage collector to the workload of your running program and to the CPU processing power at hand.
I mean that normally there's nothing wrong with your program not consuming lots of RAM: if the collector happens to sweep the garbage fast enough without hampering performance in a significant way, I see no reason to worry. Go's GC is one of the most intensely tuned parts of the runtime, and it works very well in practice.
Hence you may try to take another route:
1. Profile the memory allocations of your program.
2. Analyze the profile and try to figure out where the hot spots are, and whether (and how) they can be optimized. You might start here and continue with the gazillion other intros to this stuff.
3. Optimize. Typically this amounts to making certain buffers reusable across different calls to the same function(s) consuming them, preallocating slices instead of growing them gradually, using sync.Pool where deemed useful, etc. (see the sketch after this list).
Such measures may actually increase the memory truly used (that is, by live objects, as opposed to garbage), but they may lower the pressure on the GC.
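To give the last step a concrete, if hypothetical, flavour, here is a small sketch of the kind of changes meant; the function names and data are made up for the example:

// Hypothetical sketch of the optimizations listed above: preallocating a
// slice instead of growing it, and reusing temporary buffers via sync.Pool.
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// buildRows preallocates the slice when the final size is known (or can be
// estimated), so append never has to regrow the backing array.
func buildRows(n int) [][]string {
	rows := make([][]string, 0, n) // capacity hint
	for i := 0; i < n; i++ {
		rows = append(rows, []string{"cell A", "cell B"})
	}
	return rows
}

// bufPool keeps byte buffers reusable across calls, so each call does not
// allocate (and later have garbage-collected) its own temporary buffer.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func renderRow(cells []string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset()
		bufPool.Put(buf)
	}()
	for _, c := range cells {
		buf.WriteString(c)
		buf.WriteByte('\t')
	}
	return buf.String()
}

func main() {
	for _, row := range buildRows(3) {
		fmt.Println(renderRow(row))
	}
}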

"Well-parallelized" algorithm not sped up by multiple threads

I'm sorry to ask a question on a topic that I know so little about, but this idea has really been bugging me and I haven't been able to find any answers on the internet.
Background:
I was talking to one of my friends who is in computer science research. I'm in mostly ad-hoc development, so my understanding of a majority of CS concepts is at a functional level (I know how to use them rather than how they work). He was saying that converting a "well-parallelized" algorithm that had been running on a single thread into one that ran on multiple threads didn't result in the processing speed increase that he was expecting.
Reasoning:
I asked him what the architecture of the computer he was running this algorithm on was, and he said 16-core (non-virtualized). According to what I know about multi-core processors, the processing speed increase of an algorithm running on multiple cores should be roughly proportional to how well it is parallelized.
Question:
How can an algorithm that is "well-parallelized" and programmed correctly to run on a true multi-core processor not run several times more quickly? Is there some information that I'm missing here, or is it more likely a problem with the implementation?
Other stuff: I asked if the threads were possibly taking up more power than any individual core had available and apparently each core runs at 3.4 GHz. This is much more than the algorithm should need, and when diagnostics are run the cores aren't maxed out during runtime.
It is likely sharing something. What is being shared may not be obvious.
One of the most common non-obvious shared resources is CPU cache. If the threads are updating the same cache line that cache line has to bounce between CPUs, slowing everything down.
That can happen when accessing variables (even read-only ones) that are near each other in memory. If all accesses are read-only it is OK, but if even one CPU is writing to that cache line it will force a bounce.
A brute-force method of fixing this is to put shared variables into structures that look like:
struct var_struct {
    int value;
    char padding[128];
};
Instead of hard-coding 128, you could research which system parameter or preprocessor macro defines the cache-line size for your system type.
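For comparison, in Go the same brute-force fix looks roughly like the sketch below; the 128-byte padding is an assumed upper bound on the cache-line size, and the counting workload is invented purely for illustration:

// Sketch of the same padding idea in Go: per-worker counters padded so
// that two workers never write to the same cache line.
package main

import (
	"fmt"
	"runtime"
	"sync"
)

type paddedCounter struct {
	value int64
	_     [120]byte // pads the struct to 128 bytes (assumed cache-line upper bound)
}

func main() {
	n := runtime.NumCPU()
	counters := make([]paddedCounter, n)

	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(c *paddedCounter) {
			defer wg.Done()
			for j := 0; j < 10000000; j++ {
				c.value++ // each goroutine touches only its own cache line
			}
		}(&counters[i])
	}
	wg.Wait()

	total := int64(0)
	for i := range counters {
		total += counters[i].value
	}
	fmt.Println("total:", total)
}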
Another place that sharing can take place is inside system calls. Even seemingly innocent functions might be taking global locks. I seem to recall reading about Linux fixing an issue like this a while back with locks on the functions that return process and thread identifiers and parent identifiers.
Performance versus number of cores is often an S-like curve: at first it obviously increases, but as locking, shared caches and the like take their toll, additional cores do not add as much and may even degrade performance. Hence there is nothing mysterious here. If we knew more details about the algorithm, it might be possible to find a way to speed it up.

Find memory leak in very complex Ruby app

Hi everyone!
It's nice to work with Ruby and write some code. But over the past week, I noticed that we have a problem in our application: memory usage is growing like an O(x*3) function.
Our application is very complex: it is based on EventMachine and other external libs. What's more, it is running under the amd64 version of FreeBSD using Ruby 1.8.7-p382.
I've tried to research on my own how to find a memory leak in our app.
I've found many tools and libs, but they don't work under 64-bit FreeBSD, and I have no idea how to go about finding leaks in a huge Ruby application. It's fine if you have a few files with 200-300 lines of code, but here we have around 30 files averaging 200-300 lines of code each.
I've realized I would need far too much time to find those leaks by brute force: believing/researching/assuming that some part of the code may actually be leaking and wrapping it in tracking code, e.g. using the ruby-prof gem technique. But that is a painfully slow way to go, because as I said we have too much code.
So, my question is: how do I find a memory leak in a very complex Ruby app without putting my whole life into this work?
Thanks in advance.
One thing to try, even though it can massively degrade performance, is to manually trigger the garbage collector by calling GC.start every so often. How often is kind of subjective, as the more you run it the slower the app, and the less you run it the higher the memory footprint.
For whatever reason, the garbage collector may go on vacation from time to time, presumably not wanting to interfere if there is some heavy processing going on. As such, you may have to call it manually to have your trash taken away.
One way to avoid creating trash is to use memory more efficiently. Don't create hashes when arrays will do the job, don't create arrays when a single string will suffice, and so on. It will be important to profile your application to see what kind of objects are cluttering up your heap before you just start hacking away randomly.
If you can, try to use 1.9.2, which has made significant gains in terms of memory management. Ruby Enterprise Edition is also an option if you need 1.8.7 compatibility, as it's essentially a better garbage collector for that version.
How hard would it be to run your app on a Linux box? If you don't have the same memory problems there, it is probably something specific to your Ruby runtime. If you do have the same problems, you can use all the tools and libs that are Linux-only.
Another alternative: can you wrap your unit tests with some memory-tracking code? Most unit test frameworks make it easy to add some code before/after each test. Or you could just run each test 1000000000 times and see if the memory goes out of control. If it does, you know that something happening in that test is causing the leak, and you can continue to isolate the problem.
Have you tried counting the number of objects you have, using ObjectSpace.each_object? Although you're intending to use small batches, maybe you just have more objects than you think.
count = ObjectSpace.each_object() {}
# => 7216

Help w/ memory management...allocations shows no leaks but program leaks like crazy

So I have autoreleased/released every object that I alloc/init/copy, and the Allocations instrument seems to show minimal leaks. However, my program's memory usage does not stop increasing. I have included a screenshot of my Allocations run (I have run Allocations for longer, but it remains relatively constant; it certainly does not compare to the amount the program gains when actually running). When running my program it will double in memory over the course of about 10 hours. The memory drastically increases in the first 5 minutes, however (2-3 MB), and just keeps on going. I don't understand why Allocations would remain constant when running in Instruments but my program would just keep gaining memory when actually run.
Since I can't post images yet...here is the link to the screenshot:
allocations run
UPDATE: Here are some screenshots from my memory heapshot analysis...I am not allocating these objects explicitly and don't really know where they are coming from. Almost all of them have their source with something similar to the second screenshot details on the right (lots of HTTPs and URLs in the call tree). Anybody know where these are coming from? I know I've read about some NSURLConnection leaks but I have tried all of the cache clearing that those suggest to no avail. Thanks for all the help so far!
memory heap analysis 1
memory heap analysis 2
Try heapshots.
Are you running with different environment variables when you run in different environments?
For example, you could have NSZombie enabled when you launch your app (causing all your objects not to be freed) but not when you run in Instruments.
Just for a sanity check: how are you determining memory usage? You say that memory usage keeps going up, but not when you run in Instruments. Given that Instruments is a reliable way of measuring memory usage (the most reliable way?), this sounds a little odd, a bit like saying memory keeps going up except when I try to measure it.
If you are using autoreleased objects (like [NSString stringWithFormat:]) in a loop, the pool won't be drained until that loop is exited and the program is allowed to complete the main event loop, at which point the autorelease pool is drained and a new one is instantiated.
If you have code like this, the solution is to instantiate a new autorelease pool before entering your loop, then drain it periodically during your loop (reinstantiating the autorelease pool after you drain it).
You can use Instruments to find out where your allocations are originating. When running Instruments in the Allocations mode:
Move your mouse over the Category field in the Object Summary.
Click on the grey circle with an arrow that appears next to the field name.
This will bring up a list of the locations where the objects in that category have been instantiated, along with stats on how many allocations each location has made.
If your memory usage is rising (but not leaking), you should be able to see where that memory was created, and then track down why it is hanging around.
This tool is also very useful in reducing your memory profile for mobile applications.
