TensorFlow object detection: limiting memory and CPU usage

I managed to run the TensorFlow pet example from the tutorial. I decided to use the slowest model (because I want to use it on my own data). However, when I start the training it gets killed after running for a bit. It uses all my CPUs (4) and all my memory (8 GB). Do you know any way I can limit the number of CPUs to 2 and limit the amount of memory used? Should I reduce the batch size? My batch size is already 1.
I managed to get it running by reducing the image resizer dimensions:
image_resizer {
  keep_aspect_ratio_resizer {
    min_dimension: 300
    max_dimension: 612
  }
}
Thanks in advance.

Another idea to reduce memory usage is to shrink the queue sizes for the input data. Specifically, in the object_detection/protos/train.proto file you will see entries for batch_queue_capacity and prefetch_queue_capacity; consider setting these fields explicitly in your config file to smaller numbers.
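For instance, a minimal sketch of the relevant train_config fields (the value 2 is illustrative, not a recommendation; tune it for your machine):

train_config: {
  batch_size: 1
  # Smaller queues keep fewer decoded images in RAM at once;
  # both fields default to larger values.
  batch_queue_capacity: 2
  prefetch_queue_capacity: 2
}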

Related

Cache blocking brings no improvement for image filter on ARM

I'm experimenting with cache blocking. To do that, I implemented two convolution-based smoothing algorithms using a 5x5 Gaussian kernel (the kernel was shown as an image in the original post).
The first algorithm is just a simple double for loop, scanning left to right, top to bottom, as shown below.
(Loop code shown as an image in the original; source: https://people.engr.ncsu.edu/efg/521/f02/common/lectures/notes/lec9.html)
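Since the code was posted as an image, here is a minimal C sketch of that kind of single-pass loop (smooth_simple, img, out, W, H, kernel, and knorm are placeholder names of mine, and borders are skipped for brevity):

#include <stdint.h>

/* Plain 5x5 convolution over an 8-bit grayscale image, scanning left
 * to right, top to bottom. img/out are W*H row-major buffers; kernel
 * holds small integer Gaussian weights and knorm is their sum. */
void smooth_simple(const uint8_t *img, uint8_t *out,
                   int W, int H, const int kernel[5][5], int knorm)
{
    for (int y = 2; y < H - 2; y++)
        for (int x = 2; x < W - 2; x++) {
            int acc = 0;
            for (int ky = -2; ky <= 2; ky++)
                for (int kx = -2; kx <= 2; kx++)
                    acc += kernel[ky + 2][kx + 2] * img[(y + ky) * W + x + kx];
            out[y * W + x] = (uint8_t)(acc / knorm);
        }
}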
In the second algorithm I tried to apply cache blocking by splitting the loops into chunks, which became something like the following. I used a block size of 512x512.
(Loop code shown as an image in the original; source: https://people.engr.ncsu.edu/efg/521/f02/common/lectures/notes/lec9.html)
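Again the original was an image; a C sketch of the blocked variant, under the same placeholder names as above:

#include <stdint.h>

/* Same 5x5 convolution, but iterating over 512x512 tiles so the working
 * set stays cache-resident. BLOCK and all names are placeholders. */
#define BLOCK 512

void smooth_blocked(const uint8_t *img, uint8_t *out,
                    int W, int H, const int kernel[5][5], int knorm)
{
    for (int by = 2; by < H - 2; by += BLOCK)
        for (int bx = 2; bx < W - 2; bx += BLOCK) {
            int yend = (by + BLOCK < H - 2) ? by + BLOCK : H - 2;
            int xend = (bx + BLOCK < W - 2) ? bx + BLOCK : W - 2;
            for (int y = by; y < yend; y++)
                for (int x = bx; x < xend; x++) {
                    int acc = 0;
                    for (int ky = -2; ky <= 2; ky++)
                        for (int kx = -2; kx <= 2; kx++)
                            acc += kernel[ky + 2][kx + 2] * img[(y + ky) * W + x + kx];
                    out[y * W + x] = (uint8_t)(acc / knorm);
                }
        }
}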
I'm running the code on a Raspberry Pi 3B+, which has a Cortex-A53 with 32 KB of L1 and 256 KB of L2, I believe. I ran the two algorithms with different image sizes (2048x1536, 6000x4000, 12000x8000, and 16000x12000; 8-bit grayscale images). But across the different image sizes, the run times were very similar.
The question is: shouldn't the first algorithm suffer access latency that the second does not, especially with large images (like 12000x8000)? Based on the description of cache blocking in this link, when the 1st algorithm processes data at the end of an image row, the data at the beginning of the row should already have been evicted from the L1 cache. Using a 12000x8000 image as an example: since we are using a 5x5 kernel, 5 rows of data are needed, which is 12000x5 = 60 KB, already larger than the 32 KB L1. When we start processing a new row, 4 rows of previous data are still needed, but they have likely been evicted from L1 and must be re-fetched. The second algorithm shouldn't have this problem because the block size is small. Can anyone tell me what I am missing?
I also profiled the two algorithms with OProfile and got the following counts:
Algorithm 1:

  event              count
  L1D_CACHE_REFILL   13,933,254
  PREFETCH_LINEFILL  13,281,559

Algorithm 2:

  event              count
  L1D_CACHE_REFILL    9,456,369
  PREFETCH_LINEFILL   8,725,250
So it looks like the 1st algorithm does have more cache misses than the second, as reflected by the L1D_CACHE_REFILL counts. But it also does more data prefetching, which may be due to the simple access pattern of the loop. So does the usual story about cache blocking simply not take data prefetching into account?
Conceptually, you're right: blocking will reduce cache misses by keeping the input window in cache.
I suspect the main reason you're not seeing a speedup is that the hardware prefetcher is already streaming from all 5 input rows: your performance counters show more prefetch loads in the unblocked implementation. I suspect many textbook examples are out of date, since cache prefetching has kept getting better; Intel's L2 prefetcher could already detect and follow up to 16 linear streams about 10 years ago, I think.
Assume the filter takes 5 * 5 = 25 cycles per output pixel. On the RPi 3's 1.2 GHz core that is 25 / 1.2 GHz ≈ 20.8 ns. The I/O cost per output pixel is reading a 5-pixel-high column of new input, so the amortized bandwidth requirement is 5 bytes / 20.8 ns ≈ 229 MiB/s, which is much less than the ~2 GiB/s of DRAM bandwidth. So in theory, the relatively slow computation combined with prefetching (how effective, I'm not certain) means memory access isn't the bottleneck.
Try increasing the filter height: the prefetcher can only detect and follow a limited number of streams. Or try vectorizing the computation so that memory access becomes the bottleneck.

How to get detailed memory breakdown in the TensorFlow profiler?

I'm using the new TensorFlow profiler to profile memory usage in my neural net, which I'm running on a Titan X GPU with 12 GB of RAM. Here's some example output when I profile my main training loop:
==================Model Analysis Report======================
node name | requested bytes | ...
Conv2DBackpropInput 10227.69MB (100.00%, 35.34%), ...
Conv2D 9679.95MB (64.66%, 33.45%), ...
Conv2DBackpropFilter 8073.89MB (31.21%, 27.90%), ...
Obviously this adds up to more than 12 GB, so some of these matrices must be in main memory while others are on the GPU. I'd love to see a detailed breakdown of which variables are where at a given step. Is it possible to get more detailed information on where the various parameters are stored (main or GPU memory), either with the profiler or otherwise?
"Requested bytes" shows a sum over all memory allocations, but that memory can be allocated and de-allocated. So just because "requested bytes" exceeds GPU RAM doesn't necessarily mean that memory is being transferred to CPU.
In particular, for a feedforward neural network, TF will normally keep around the forward activations, to make backprop efficient, but doesn't need to keep the intermediate backprop activations, i.e. dL/dh at each layer, so it can just throw away these intermediates after it's done with these. So I think in this case what you care about is the memory used by Conv2D, which is less than 12 GB.
You can also use the timeline to verify that total memory usage never exceeds 12 GB.
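A minimal TF 1.x sketch of collecting that timeline, assuming a session-based training loop (sess and train_op stand in for your own session and training op):

import tensorflow as tf
from tensorflow.python.client import timeline

# Trace one training step with full metadata, including memory stats.
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
sess.run(train_op, options=run_options, run_metadata=run_metadata)

# Write a Chrome trace; open chrome://tracing and load the file to
# inspect per-device memory usage over the step.
tl = timeline.Timeline(run_metadata.step_stats)
with open('timeline.json', 'w') as f:
    f.write(tl.generate_chrome_trace_format(show_memory=True))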

spark.shuffle.safetyFraction and spark.storage.safetyFraction difference

I have been studying this article on Apache Spark architecture for some time now.
There are two safety fractions as per the description:
spark.shuffle.safetyFraction and spark.storage.safetyFraction, which are given as 0.8 and 0.9 of the JVM heap respectively.
Shuffle takes 0.2 of spark.shuffle.safetyFraction whereas storage takes 0.6 of spark.storage.safetyFraction.
The image given is, however, misleading (one of the comments confirms this).
My question is:
How can shuffle and storage take 0.8 and 0.9 of the same JVM memory?
Are they shared? If so, what happens in the worst case?
I googled but couldn't find any documentation on these.
Any help is appreciated! :)
These configs are for internal use only and are not exposed publicly; please refer to this pull request. You can set the memoryFraction settings instead.
The JVM heap can be divided into three parts:
Storage, Execution (Shuffle), and Other.
Storage: by default 60% of the heap, controlled by spark.storage.memoryFraction (default value 0.6). spark.storage.safetyFraction controls the share of that we can really allocate, 0.9 by default, which means 10% is held back to avoid OOM. Of the usable 90% of storage memory, 20% is used for unrolling.
Shuffle: 20% of the heap by default, controlled by spark.shuffle.memoryFraction. For safety reasons we cannot use all of it, so another parameter controls the usable share: spark.shuffle.safetyFraction, 0.8 by default.
Other: reserved, 20% of the heap.
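To make the split concrete, take a hypothetical 10 GB executor heap (the 10 GB figure is just an example):

  storage usable = 10 GB * 0.6 (memoryFraction) * 0.9 (safetyFraction) = 5.4 GB
  shuffle usable = 10 GB * 0.2 (memoryFraction) * 0.8 (safetyFraction) = 1.6 GB
  other          = 10 GB * 0.2 (reserved)                              = 2.0 GB

So the two safety fractions apply to different slices of the heap; shuffle and storage never claim 0.8 and 0.9 of the same memory.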

Redis: How can I check how much memory is used in real time?

I want to check how much memory is used in real time. For instance, each time I set or insert some data, I want to know how much the memory usage increased and how much is used in total.
I tried to use the INFO command and checked whether the used_memory or used_memory_* properties would work, but I found they only seem to show the memory allotted by the system, because each time I check them after inserting new data, they stay the same.
Is there any way I can check the real-time memory usage in Redis?
The used_memory field is what you are looking for. It is not the memory allotted by the system as you said; it is the memory handed to Redis by the process memory allocator.
Example:
> info memory
...
used_memory:541368
...
> set y "titi"
OK
> info memory
...
used_memory:541448 # i.e. +80 bytes
...
> del y
(integer) 1
> info memory
...
used_memory:541368
...
Please note that Redis performs a number of memory-related optimizations. For instance, it can share objects holding small integers, and if you append data to an existing string, the corresponding buffer does not grow on every append operation. So, depending on these optimizations, the memory increase/decrease for a given set of operations is not always consistent.

Fortran array memory management

I am working on optimizing a fluid flow and heat transfer analysis program written in Fortran. As I try to run larger and larger mesh simulations, I'm running into memory limitation problems. The mesh, though, is not all that big: only 500,000 cells, which is small peanuts for a typical CFD code. Even when I request 80 GB of memory for my job, it crashes due to insufficient virtual memory.
I have a few guesses at which arrays are hogging all that memory. One in particular is being allocated as (28801,345600). Correct me if I'm wrong in my calculations, but a double precision value is 8 bytes, so the size of this array would be 28801 * 345600 * 8 bytes ≈ 79.6 GB?
Now, I think most of this array ends up being zeros throughout the calculation, so we don't need to store them. I think I can change the solution algorithm to store only the non-zero values and work on a much smaller array. However, I want to be sure I'm looking at the right arrays before reducing their size. So first, did I calculate the array size above correctly? And second, is there a way to have Fortran show array sizes in MB or GB during runtime? In addition to printing out the most memory-intensive arrays, I'd be interested in seeing how the memory requirements of the code change during runtime.
Memory usage is a rather vaguely defined concept on systems with virtual memory. You can have a large amount of memory allocated (a large virtual memory size) while only a small part of it is actively used (a small resident set size, or RSS).
Unix systems provide the getrusage(2) system call, which returns information about the amount of system resources used by the calling thread/process/process children. In particular, it provides the maximum RSS reached since the process was started. You can write a simple Fortran-callable C helper function that calls getrusage(2) and returns the value of the ru_maxrss field of the rusage structure.
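A minimal sketch of such a helper (the function name is my own, and the trailing underscore assumes gfortran-style name mangling; with other compilers, use ISO_C_BINDING instead):

#include <sys/resource.h>

/* Return the peak resident set size of the calling process.
 * On Linux, ru_maxrss is reported in kilobytes. */
long getmaxrss_(void)
{
    struct rusage usage;
    getrusage(RUSAGE_SELF, &usage);
    return usage.ru_maxrss;
}

On the Fortran side, declare it as an external integer function (e.g. integer(8) :: getmaxrss) and print getmaxrss() before and after the allocations you suspect.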
If you are running on Linux and don't care about portability, then you may just open and read from /proc/self/status. It is a simple text pseudofile that among other things contains several lines with statistics about the process virtual memory usage:
...
VmPeak: 9136 kB
VmSize: 7896 kB
VmLck: 0 kB
VmHWM: 7572 kB
VmRSS: 6316 kB
VmData: 5224 kB
VmStk: 88 kB
VmExe: 572 kB
VmLib: 1708 kB
VmPTE: 20 kB
...
An explanation of the various fields can be found here. You are mostly interested in VmData, VmRSS, VmHWM and VmSize. You can open /proc/self/status as a regular file with OPEN() and process it entirely in your Fortran code, as sketched below.
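For example, a minimal Fortran sketch that scans /proc/self/status for the VmRSS line (the subroutine name is my own; newunit= requires Fortran 2008):

subroutine print_vmrss()
  implicit none
  character(len=256) :: line
  integer :: ios, u
  ! Each line of /proc/self/status looks like "VmRSS:      6316 kB".
  open(newunit=u, file='/proc/self/status', action='read', iostat=ios)
  if (ios /= 0) return
  do
    read(u, '(A)', iostat=ios) line
    if (ios /= 0) exit
    if (line(1:6) == 'VmRSS:') then
      print *, trim(line)
      exit
    end if
  end do
  close(u)
end subroutine print_vmrss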
See also what memory limitations are set with ulimit -a and ulimit -aH. You may be exceeding the hard virtual memory size limit. If you are submitting jobs through a distributed resource manager (e.g. SGE/OGE, Torque/PBS, LSF, etc.) check that you request enough memory for the job.
