Appcelerator Studio - App crashes due to memory leak

I currently have a window that uses a view as a navigation view. Into that navigation view I also add an array of views. Just recently, I've been getting an Out of Memory error on the Galaxy S4.
[WARN] : TiUIScrollView: (main) [1647,1647] Scroll direction could not be determined based on the provided view properties. Default VERTICAL scroll direction being used. Use the 'scrollType' property to explicitly set the scrolling direction.
[WARN] : TiUIScrollView: (main) [173,1820] Scroll direction could not be determined based on the provided view properties. Default VERTICAL scroll direction being used. Use the 'scrollType' property to explicitly set the scrolling direction.
[WARN] : TiUIScrollView: (main) [240,2060] Scroll direction could not be determined based on the provided view properties. Default VERTICAL scroll direction being used. Use the 'scrollType' property to explicitly set the scrolling direction.
[INFO] : art: Clamp target GC heap from 135MB to 128MB
[INFO] : art: Clamp target GC heap from 143MB to 128MB
[INFO] : art: WaitForGcToComplete blocked for 27.099ms for cause Alloc
[INFO] : art: Alloc sticky concurrent mark sweep GC freed 0(0B) AllocSpace objects, 0(0B) LOS objects, 0% free, 127MB/128MB, paused 793us total 8.819ms
[INFO] : art: Clamp target GC heap from 143MB to 128MB
[INFO] : art: Alloc partial concurrent mark sweep GC freed 12(528B) AllocSpace objects, 0(0B) LOS objects, 0% free, 127MB/128MB, paused 793us total 25.634ms
[INFO] : art: WaitForGcToComplete blocked for 32.867ms for cause Background
[INFO] : art: Clamp target GC heap from 143MB to 128MB
[INFO] : art: Alloc concurrent mark sweep GC freed 57(13KB) AllocSpace objects, 0(0B) LOS objects, 0% free, 127MB/128MB, paused 701us total 39.062ms
[INFO] : art: Forcing collection of SoftReferences for 1101KB allocation
[INFO] : art: Clamp target GC heap from 143MB to 128MB
[INFO] : art: Alloc concurrent mark sweep GC freed 59(2504B) AllocSpace objects, 0(0B) LOS objects, 0% free, 127MB/128MB, paused 549us total 33.752ms
[ERROR] : art: Throwing OutOfMemoryError "Failed to allocate a 1128396 byte allocation with 278738 free bytes and 272KB until OOM"
[ERROR] : TiUIHelper: (main) [525,2585] Unable to load bitmap. Not enough memory: Failed to allocate a 1128396 byte allocation with 278738 free bytes and 272KB until OOM
[INFO] : art: Alloc sticky concurrent mark sweep GC freed 35(3KB) AllocSpace objects, 0(0B) LOS objects, 0% free, 127MB/128MB, paused 885us total 9.521ms
[INFO] : art: Clamp target GC heap from 143MB to 128MB
[INFO] : art: Alloc partial concurrent mark sweep GC freed 10(384B) AllocSpace objects, 0(0B) LOS objects, 0% free, 127MB/128MB, paused 671us total 26.031ms
[INFO] : art: Clamp target GC heap from 143MB to 128MB
[INFO] : art: Alloc concurrent mark sweep GC freed 5(192B) AllocSpace objects, 0(0B) LOS objects, 0% free, 127MB/128MB, paused 701us total 35.888ms
[INFO] : art: Forcing collection of SoftReferences for 1101KB allocation
[INFO] : art: Clamp target GC heap from 143MB to 128MB
[INFO] : art: Alloc concurrent mark sweep GC freed 3(96B) AllocSpace objects, 0(0B) LOS objects, 0% free, 127MB/128MB, paused 732us total 38.513ms
[ERROR] : art: Throwing OutOfMemoryError "Failed to allocate a 1128396 byte allocation with 278258 free bytes and 271KB until OOM"
[ERROR] : TiUIHelper: (main) [121,2706] Unable to load bitmap. Not enough memory: Failed to allocate a 1128396 byte allocation with 278258 free bytes and 271KB until OOM
[INFO] : art: Alloc sticky concurrent mark sweep GC freed 235(13KB) AllocSpace objects, 0(0B) LOS objects, 0% free, 127MB/128MB, paused 732us total 6.988ms
I can see it creating the views, but then it crashes. I have around 40-50 views. I'm trying to figure out a way to combat this problem. Anyone have some tips or pointers?

Never mind, fixed it! Downsized some of my images!

Related

CMS class unloading took too much time

Under heavy load we noticed a large GC pause (400 ms) in our application. During the investigation it turned out that the pause happens during CMS Final Remark, and that the class unloading phase took far more time than the other phases (10x-100x):
(CMS Final Remark)
[YG occupancy: 142247 K (294912 K)]
2019-03-13T07:38:30.656-0700: 24252.576: [Rescan (parallel) , 0.0216770 secs]
2019-03-13T07:38:30.677-0700: 24252.598: [weak refs processing, 0.0028353 secs]
2019-03-13T07:38:30.680-0700: 24252.601: [class unloading, 0.3232543 secs]
2019-03-13T07:38:31.004-0700: 24252.924: [scrub symbol table, 0.0371301 secs]
2019-03-13T07:38:31.041-0700: 24252.961: [scrub string table, 0.0126352 secs]
[1 CMS-remark: 2062947K(4792320K)] 2205195K(5087232K), 0.3986822 secs]
[Times: user=0.63 sys=0.01, real=0.40 secs]
Total time for which application threads were stopped: 0.4156259 seconds, Stopping threads took: 0.0014133 seconds
This pause always happens within the first second of the performance test, and its duration varies from 300 ms to 400+ ms.
Unfortunately, I have no access to the server (it's under maintenance) and only have logs from several test runs. But I want to be prepared for further investigation when the server becomes available again, and I have no idea what causes such behavior.
My first thought was Linux huge pages, but we don't use them.
After more time in the logs, I found the following:
Heap after GC invocations=7969 (full 511):
par new generation total 294912K, used 23686K [0x0000000687800000, 0x000000069b800000, 0x000000069b800000)
eden space 262144K, 0% used [0x0000000687800000, 0x0000000687800000, 0x0000000697800000)
from space 32768K, 72% used [0x0000000699800000, 0x000000069af219b8, 0x000000069b800000)
to space 32768K, 0% used [0x0000000697800000, 0x0000000697800000, 0x0000000699800000)
concurrent mark-sweep generation total 4792320K, used 2062947K [0x000000069b800000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 282286K, capacity 297017K, committed 309256K, reserved 1320960K
class space used 33038K, capacity 36852K, committed 38872K, reserved 1048576K
}
Heap after GC invocations=7970 (full 511):
par new generation total 294912K, used 27099K [0x0000000687800000, 0x000000069b800000, 0x000000069b800000)
eden space 262144K, 0% used [0x0000000687800000, 0x0000000687800000, 0x0000000697800000)
from space 32768K, 82% used [0x0000000697800000, 0x0000000699276df0, 0x0000000699800000)
to space 32768K, 0% used [0x0000000699800000, 0x0000000699800000, 0x000000069b800000)
concurrent mark-sweep generation total 4792320K, used 2066069K [0x000000069b800000, 0x00000007c0000000, 0x00000007c0000000)
Metaspace used 282303K, capacity 297017K, committed 309256K, reserved 1320960K
class space used 33038K, capacity 36852K, committed 38872K, reserved 1048576K
}
The GC pause under investigation happens between GC invocations 7969 and 7970, and the amount of used Metaspace is almost the same (it actually increased slightly).
So it looks like the problem is not stale classes that are no longer used (since no space was cleared), and it is not a safepoint-reaching issue, since blocking the threads took very little time (0.0014133 s).
How should I investigate such a case, and what diagnostic information should I collect to be properly prepared?
Technical details
CentOS 5 + JDK 8 + CMS GC with the following args:
-XX:+CMSClassUnloadingEnabled
-XX:CMSIncrementalDutyCycleMin=10
-XX:+CMSIncrementalPacing
-XX:CMSInitiatingOccupancyFraction=50
-XX:+CMSParallelRemarkEnabled
-XX:+DisableExplicitGC
-XX:InitialHeapSize=5242880000
-XX:MaxHeapSize=5242880000
-XX:MaxNewSize=335544320
-XX:MaxTenuringThreshold=6
-XX:NewSize=335544320
-XX:OldPLABSize=16
-XX:+UseCompressedClassPointers
-XX:+UseCompressedOops
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
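
Not an answer from this thread, but one way to be prepared for the next run: JDK 8 HotSpot can trace class loading and unloading directly, and class loader statistics can be sampled on a live JVM. The flags and the jmap mode below are standard JDK 8 options, but verify them against your exact build:
-XX:+TraceClassUnloading    (logs each class unloaded at CMS remark)
-XX:+TraceClassLoading      (logs where the churn of new classes comes from)
jmap -clstats <pid>         (per-class-loader statistics on a live JVM)
If classes are being generated and discarded continuously (for example by reflection accessors or a scripting engine), these logs should show it, which would explain a long class unloading phase at remark.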

Go: memory issues

I need your wisdom.
I have a huge daemon written in Go. Some time ago a user reported that there might be a memory leak somewhere in the code.
I started investigating the issue. When an initial code inspection didn't give me any clues about the nature of this leak, I tried to focus on how my process works.
My idea was simple: if I failed to remove references to certain objects, my heap should be constantly growing. I wrote the following procedure to monitor the heap:
import (
	"fmt"
	"runtime"
	"time"
)

// PrintHeap logs heap statistics every 5 seconds; it is started as `go PrintHeap()`.
func PrintHeap() {
	ticker := time.NewTicker(time.Second * 5)
	for {
		<-ticker.C
		st := &runtime.MemStats{}
		runtime.ReadMemStats(st)
		// From Golang docs: HeapObjects increases as objects are allocated
		// and decreases as the heap is swept and unreachable objects are
		// freed.
		fmt.Println("Heap allocs:", st.Mallocs, "Heap frees:",
			st.Frees, "Heap objects:", st.HeapObjects)
	}
}
This procedure prints some info about the heap every 5 seconds, including the number of objects currently allocated.
Now a few words about what the daemon does. It processes lines from some UDP input. Each line carries some info about a certain HTTP request and is parsed into a typical Go struct. This struct has some numeric and string fields, including one for the request path. Then lots of things happen to this struct, but those things are irrelevant here.
Now, I set the input rate to 1500 lines per second, each line being rather short (read: with the standard request path, /).
After running the application I could see that the heap size stabilizes after some time (note that Heap objects is simply Mallocs minus Frees, i.e. the number of live objects):
Heap allocs: 180301314 Heap frees: 175991675 Heap objects: 4309639
Heap allocs: 180417372 Heap frees: 176071946 Heap objects: 4345426
Heap allocs: 180526254 Heap frees: 176216276 Heap objects: 4309978
Heap allocs: 182406470 Heap frees: 177496675 Heap objects: 4909795
Heap allocs: 183190214 Heap frees: 178248365 Heap objects: 4941849
Heap allocs: 183302680 Heap frees: 178958823 Heap objects: 4343857
Heap allocs: 183412388 Heap frees: 179101276 Heap objects: 4311112
Heap allocs: 183528654 Heap frees: 179181897 Heap objects: 4346757
Heap allocs: 183638282 Heap frees: 179327221 Heap objects: 4311061
Heap allocs: 185609758 Heap frees: 181330408 Heap objects: 4279350
When this state was reached, memory consumption stopped growing.
Now, I changed my input in such a way that each line became more than 2k chars long (with a huge /AAAAA... request path), and that's where weird things started to happen.
Heap size grew drastically, but still became sort of stable after some time:
Heap allocs: 18353000513 Heap frees: 18335783660 Heap objects: 17216853
Heap allocs: 18353108590 Heap frees: 18335797883 Heap objects: 17310707
Heap allocs: 18355134995 Heap frees: 18336081878 Heap objects: 19053117
Heap allocs: 18356826170 Heap frees: 18336182205 Heap objects: 20643965
Heap allocs: 18366029630 Heap frees: 18336925394 Heap objects: 29104236
Heap allocs: 18366122614 Heap frees: 18336937295 Heap objects: 29185319
Heap allocs: 18367840866 Heap frees: 18337205638 Heap objects: 30635228
Heap allocs: 18368909002 Heap frees: 18337309215 Heap objects: 31599787
Heap allocs: 18369628204 Heap frees: 18337362196 Heap objects: 32266008
Heap allocs: 18373482440 Heap frees: 18358282964 Heap objects: 15199476
Heap allocs: 18374488754 Heap frees: 18358330954 Heap objects: 16157800
But memory consumption grew linearly and never stopped. My question is: any ideas about what's going on?
I thought about memory fragmentation due to lots of huge objects, but honestly I don't really know what to think.
You could try the Go memory profiling tools.
First you need to alter your program so that it provides a memory profile. There are several ways to do this.
You can use package net/http/pprof see https://golang.org/pkg/net/http/pprof/ if it is ok for you to publish that endpoint.
You can use package runtime/pprof and make your program dump a memory profile to a known location in response to specific events like receiving a signal or something.
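If publishing the endpoint is acceptable, the net/http/pprof route is just a blank import plus an HTTP listener. A minimal sketch (the address and port are my assumptions, pick whatever suits your deployment):
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // side effect: registers /debug/pprof/* on http.DefaultServeMux
)

func main() {
	// Serve the profiling endpoints on localhost only, so they are not public.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the daemon's real work would run here ...
	select {} // placeholder that keeps this sketch alive
}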
After that you can analyse the memory profile using go tool pprof, which you can invoke either as go tool pprof <path/to/executable> <file> if you chose to dump the memory profile to a file, or as go tool pprof <path/to/executable> http://<host>:<port>/debug/pprof/heap if you used net/http/pprof. Use top5 to get the top 5 functions that allocated most of your memory; you can then use the list command on specific functions to see which lines allocated how much memory.
Starting from that, you should be able to reason about the increase in memory you are observing.
You can also read about this at https://blog.golang.org/profiling-go-programs which also describes how to profile your cpu usage. Just search for the word memprofile to jump to the relevant parts.
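As a side note, a hypothesis of mine that is not established in this thread: the growth appeared only once the lines became 2k chars long, and in Go a sub-slice of a string shares its backing array, so storing just the path field can pin each whole line in memory. A sketch of both variants, with hypothetical names (parseLine, request):
import "strings"

// parseLine is a hypothetical stand-in for the daemon's real parser.
type request struct {
	Path string
}

func parseLine(line string) request {
	i := strings.IndexByte(line, ' ')
	if i < 0 {
		i = len(line)
	}
	// request{Path: line[:i]} would share line's backing array, so every
	// retained request pins the entire 2k-char line.
	// Copying the bytes yields an independent small string instead:
	return request{Path: string([]byte(line[:i]))}
}
The heap profile would distinguish the two cases: with sharing, the allocations show up at the point where the incoming line is created rather than in the parser.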

Reading Go gctrace output

I have gctrace output that looks like this:
gc 6 @48.155s 15%: 0.093+12360+0.32 ms clock, 0.18+7720/21356/3615+0.65 ms cpu, 11039->13278->6876 MB, 14183 MB goal, 8 P
I am not sure how to read the CPU times in particular. I understand that it is broken down into three phases (STW sweep termination, concurrent mark/scan, and STW mark termination), but I'm not sure what the + signs mean (i.e. 0.18+7720 and 3615+0.65). What do these + signs signify?
In your case, they look like assist and termination times:
// CPU time
0.18ms  : STW sweep termination.
7720ms  : Mark/Scan - assist time (GC performed in line with allocation).
21356ms : Mark/Scan - background GC time.
3615ms  : Mark/Scan - idle GC time.
0.65ms  : STW mark termination.
I think this format changes (or may change) across Go versions, and you can find more detailed info in the runtime package docs.
Currently, it is:
gc # @#s #%: #+#+# ms clock, #+#/#/#+# ms cpu, #->#-># MB, # MB goal, # P
where the fields are as follows:
gc # the GC number, incremented at each GC
@#s time in seconds since program start
#% percentage of time spent in GC since program start
#+...+# wall-clock/CPU times for the phases of the GC
#->#-># MB heap size at GC start, at GC end, and live heap
# MB goal goal heap size
# P number of processors used
See also Interpreting GC trace output.
Annotated example:
gc 6 @48.155s 15%: 0.093+12360+0.32 ms clock, 0.18+7720/21356/3615+0.65 ms cpu, 11039->13278->6876 MB, 14183 MB goal, 8 P

gc 6                               the sixth GC since program start
@48.155s                           48.155 s since program start
15%                                percentage of time spent in GC since program start
0.093+12360+0.32 ms clock          wall clock: STW sweep termination + concurrent mark and scan + STW mark termination
0.18+7720/21356/3615+0.65 ms cpu   CPU: STW sweep termination + mark/scan split into assist (GC performed in line with allocation) / background / idle + STW mark termination
11039->13278->6876 MB              heap size at GC start, at GC end, and live heap
14183 MB goal                      goal heap size
8 P                                number of processors used
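For reference, traces in exactly this format come from the Go runtime itself: any program prints one line per collection to stderr when run with the gctrace debug variable set (the binary name here is a placeholder):
GODEBUG=gctrace=1 ./mydaemon 2>gctrace.log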

Why doesn't this Ruby program return off heap memory to the operating system?

I am trying to understand when memory allocated off the Ruby heap gets returned to the operating system. I understand that Ruby never returns memory allocated to its heap, but I am still not sure about the behaviour of off-heap memory, i.e. those objects that don't fit into a 40-byte RVALUE.
Consider the following program that allocates some large strings and then forces a major GC.
require 'objspace'

STRING_SIZE = 250

def print_stats(msg)
  puts '-------------------'
  puts msg
  puts '-------------------'
  puts "RSS: #{`ps -eo rss,pid | grep #{Process.pid} | grep -v grep | awk '{ print $1,"KB";}'`}"
  puts "HEAP SIZE: #{(GC.stat[:heap_sorted_length] * 408 * 40)/1024} KB"
  puts "SIZE OF ALL OBJECTS: #{ObjectSpace.memsize_of_all/1024} KB"
end

def run
  print_stats('START WORK')
  @data = []
  600_000.times do
    @data << " " * STRING_SIZE
  end
  print_stats('END WORK')
  @data = nil
end

run
GC.start
print_stats('AFTER FORCED MAJOR GC')
Running this program with Ruby 2.2.3 on MRI produces the following output. After a forced major GC, the heap size is as expected, but RSS has not decreased significantly.
-------------------
START WORK
-------------------
RSS: 7036 KB
HEAP SIZE: 1195 KB
SIZE OF ALL OBJECTS: 3172 KB
-------------------
END WORK
-------------------
RSS: 205660 KB
HEAP SIZE: 35046 KB
SIZE OF ALL OBJECTS: 178423 KB
-------------------
AFTER FORCED MAJOR GC
-------------------
RSS: 164492 KB
HEAP SIZE: 35046 KB
SIZE OF ALL OBJECTS: 2484 KB
Compare these results to the following, where we allocate one large object instead of many smaller objects.
def run
  print_stats('START WORK')
  @data = " " * STRING_SIZE * 600_000
  print_stats('END WORK')
  @data = nil
end
-------------------
START WORK
-------------------
RSS: 7072 KB
HEAP SIZE: 1195 KB
SIZE OF ALL OBJECTS: 3170 KB
-------------------
END WORK
-------------------
RSS: 153584 KB
HEAP SIZE: 1195 KB
SIZE OF ALL OBJECTS: 149064 KB
-------------------
AFTER FORCED MAJOR GC
-------------------
RSS: 7096 KB
HEAP SIZE: 1195 KB
SIZE OF ALL OBJECTS: 2483 KB
Note the final RSS value. We seem to have freed all the memory we allocated for the big string.
I am not sure why the second example releases the memory but the first example doesn't, as they are both allocating memory off the Ruby heap. This is one reference that could provide an explanation, but I would be interested in explanations from others:
Releasing memory back to the kernel also has a cost. User space memory
allocators may hold onto that memory (privately) in the hope it can be
reused within the same process and not give it back to the kernel for
use in other processes.
@joanbm has a very good point here. The article he referenced explains this pretty well:
Ruby's GC releases memory gradually, so when you do a GC on one big chunk of memory pointed to by one reference, it releases it all at once; but when there are a lot of references, the GC releases memory in smaller chunks.
Several calls to GC.start will release more and more memory in the first example.
Here are two other articles to dig deeper:
http://thorstenball.com/blog/2014/03/12/watching-understanding-ruby-2.1-garbage-collector/
https://samsaffron.com/archive/2013/11/22/demystifying-the-ruby-gc

What are reasons for "Cannot allocate memory" except of exceeding address space and memory fragmentation?

The problem is that in a 32-bit application on Mac OS X I receive this error:
malloc: *** mmap(size=49721344) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
For reference, the error code is defined in sys/errno.h:
#define ENOMEM 12 /* Cannot allocate memory */
The memory allocation pattern is like this:
1. Nearly 250MB of memory is allocated first.
2. 6 blocks of 32MB are allocated.
3. Then 27 images are each handled like this:
   3.1. Allocate 16MB (the image bitmap is loaded).
   3.2. Allocate 32MB, process it, free these 32MB.
   3.3. Again allocate 32MB, process it, free these 32MB.
   3.4. Free the 16MB allocated in step 3.1.
4. Free 4 of the blocks allocated in step 2 (2 blocks are still used).
5. Free the 250MB block allocated in step 1.
6. Allocate blocks of various sizes; the total size doesn't exceed 250MB. And here I receive the mentioned memory allocation error.
I've checked that none of these memory blocks is leaked, so I guess used memory at any given time stays below 1GB, which should be accessible on a 32-bit system.
The second guess was memory fragmentation. But I've checked that all blocks in step 3 reuse the same addresses, so I touch less than 1GB of memory; memory fragmentation should not be an issue.
Now I am completely lost as to what the reason for the failed allocation could be. Also, everything works OK when I process fewer than 27 images. Here is part of the heap command's output before step 6, for 26 images:
Process 1230: 4 zones
Zone DefaultMallocZone_0x273000: Overall size: 175627KB; 29620 nodes malloced for 68559KB (39% of capacity); largest unused: [0x6f800000-8191KB]
Zone DispatchContinuations_0x292000: Overall size: 4096KB; 1 nodes malloced for 1KB (0% of capacity); largest unused: [0x2600000-1023KB]
Zone QuartzCore_0x884400: Overall size: 232KB; 7039 nodes malloced for 132KB (56% of capacity); largest unused: [0x3778ca0-8KB]
Zone DefaultPurgeableMallocZone_0x27f2000: Overall size: 4KB; 0 nodes malloced for 0KB (0% of capacity); largest unused: [0x3723000-4KB]
All zones: 36660 nodes malloced - 68691KB
And for 27 images:
Process 1212: 4 zones
Zone DefaultMallocZone_0x273000: Overall size: 167435KB; 30301 nodes malloced for 68681KB (41% of capacity); largest unused: [0x6ea51000-32372KB]
Zone DispatchContinuations_0x292000: Overall size: 4096KB; 1 nodes malloced for 1KB (0% of capacity); largest unused: [0x500000-1023KB]
Zone QuartzCore_0x106b000: Overall size: 192KB; 5331 nodes malloced for 101KB (52% of capacity); largest unused: [0x37f2f98-8KB]
Zone DefaultPurgeableMallocZone_0x30f8000: Overall size: 4KB; 0 nodes malloced for 0KB (0% of capacity); largest unused: [0x368f000-4KB]
All zones: 35633 nodes malloced - 68782KB
So what other reasons are there for "Cannot allocate memory", and how can I diagnose them? Or, if I made a mistake in ruling out the reasons mentioned above, how can I check them again?
It turned out I had made a mistake when checking that the address space was not exhausted. Instead of using the heap command I should have used vmmap. vmmap revealed that most of the address space is taken by images mapped into memory.
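For anyone comparing the two tools: both ship with the macOS developer tools, and the difference is exactly what mattered here. heap inspects only the malloc zones, while vmmap shows the entire virtual address space, including file and image mappings (the pid comes from the output above):
heap 1212     (malloc zone statistics only, which is what was quoted above)
vmmap 1212    (every region of the address space, including mapped images and files)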
