Can I reduce the size of an array in CUDA? - memory-management

I have allocated memory for a 3D array using cudaMalloc3D. After execution of the first kernel, I established that I do not need part of it.
For example, in pseudocode:
A = [100,100,100]
kernel() // data of interest is just in a subrange of A
B = [10:20, 20:100, 50:80] // the part that I need; the other entries I would like to have removed
... // new allocations
kernelb()...
I would like to free the rest of the memory (or immediately use it for other arrays that I need to allocate now).
I know that I can free the array and reallocate, but that does not seem like the best option.
P.S.
By the way, is there a way to use cudaMallocAsync like cudaMalloc3D? I mean that cudaMalloc3D makes it convenient to use a 3D array and takes care of the padding.

The current CUDA API does not have realloc functionality.
It seems you already know the common workaround: cudaMalloc a smaller array -> cudaMemcpy into the smaller array -> cudaFree the large array.
In case you really need realloc, you could write your own allocator using GPU virtual memory management. https://developer.nvidia.com/blog/introducing-low-level-gpu-virtual-memory-management/
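For a linear allocation, that workaround looks roughly like the sketch below (illustrative only; the pointer names and sizes are hypothetical, and for a pitched 3D allocation from cudaMalloc3D you would describe the sub-volume with a cudaMemcpy3DParms structure and cudaMemcpy3D instead of a plain cudaMemcpy):
float *d_big, *d_small;
size_t bigCount   = 100 * 100 * 100;   // original extent
size_t smallCount = 10 * 80 * 30;      // extent of the region you actually need

cudaMalloc((void**)&d_big, bigCount * sizeof(float));
// ... kernel() fills d_big; only a subrange turns out to be of interest ...

cudaMalloc((void**)&d_small, smallCount * sizeof(float));
// copy the region of interest device-to-device; a strided 3D sub-box would
// need cudaMemcpy3D with the source/destination pitches and extents filled in
cudaMemcpy(d_small, d_big /* + offset of the region */, smallCount * sizeof(float),
           cudaMemcpyDeviceToDevice);

cudaFree(d_big);   // the rest of the memory is now free for other arrays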

Related

How to fix Run Time Error '7' Out of memory in visual basic 6?

I am trying to ZIP a folder with sub-folders and files in VB6. For that I read each file and store them one by one in a byte array using ReDim Preserve. But large folders with a size larger than 130 MB throw an Out of Memory error. I have 8 GB of RAM in my PC, so it shouldn't be a problem. So, is this some limitation of Visual Basic 6, that we can't use more than 150 MB of memory?
'Length of a particular file is determined
lngFileLen = FileLen(a_strFilePath)
DoEvents
If lngFileLen <> 0 Then
    m_lngPtr = m_lngPtr + lngFileLen
    'Next line throws error once m_lngPtr reaches around 150 MB
    ReDim Preserve arrFileBuffer(1 To m_lngPtr)
End If
First of all, VB6 arrays can only be resized to a maximum of 2,147,483,647 elements. Since that's also the upper limit of a Long in VB6, that's unlikely to be the problem. However, even though it may be allowed to make an array that big, it's running in a 32-bit process, so it's still subject to the limit of 2 GB of addressable memory for the whole process. The VB6 run-time has some overhead, so it's using some of that memory for other things, and since your program is likely doing other things too, that will be using up some memory as well.
In addition to that, when you create an array, the system has to find that number of bytes of contiguous memory. So, even when there is enough memory available, within the 2GB limit, if it's sufficiently fragmented, you can still get out of memory errors. For that reason, creating gigantic arrays is always a concern.
Next, you are using ReDim Preserve, which requires twice the memory. When you resize the array like that, what it actually has to do, under the hood, is create a second array of the new size and then copy all of the other data out of the old array into the new one. Once it's done copying all the data out of the source array, it can then delete it, but while it's performing the copy, it needs to hold both the old array and the new array in memory simultaneously. That means that in a best case scenario, even if there was no other allocated memory or fragmentation, the maximum memory size of an array that you could resize would be 1GB.
Finally, in your example, you never showed what the data type of the array is. If it's an array of bytes, you should be good, I think (the memory size of the array would be only slightly more than its length in elements). However, if, for instance, it's an array of strings or variants, then I believe that's going to require a minimum of 4 bytes per element, thereby more than quadrupling the memory size of the array.

Minimizing global memory reads in OpenCL with vectors?

Suppose my kernel takes 4 (or 3, or 2) unrelated float or double args, or that I want to access 4 separate floats from global memory. Will this cause 4 separate global memory accesses? Is accessing a single vector of 4 floats or doubles faster than accessing 4 separate ones? If so, am I better off packing them into a single vector and then, say, using #defines to reference the individual members?
If this does increase the performance, do I have to do it myself, or might the compiler be smart enough to automatically convert 4 separate float reads into a single vector for me? Is this what "auto-vectorization" is? I've seen auto-vectorization mentioned in a few documents, without detailed explanation of exactly what it does, except that it seems to be an optional performance optimization for CPUs only, not GPUs.
Using vectors depends on the kernel itself. If you need all four values at the same time (for example, at the start of the kernel or at the start of a loop), it is better to pack them, because they will be assigned during one read (the values in a single vector are stored sequentially).
On the other hand, when you need only some of the values, you can speed up execution by reading only what you need.
Another case is when you read them one by one, with each read separated by some computation (i.e., giving the GPU some time to fetch the data).
Basically, these data-read segments behave like a buffer. If you have enough instances (work-items), the number of reads is the same in the optimal case, and what really counts is how well those reads are used.
The compiler often unpacks these structures, so the only speedup is that you have all the variables stored together: when you read, you fill them all with one read, and the rest of the fetched data is used for another instance.
As an example, I will use a 128-bit wide bus and 4 floats (32 bits each).
For the packed case: 128b / (32b * 4) = 1 instance per read.
For scalar data types, there are N reads (N = number of variables), each read filling one variable in several instances, up to the number of values fetched:
128b / 32b = 4 instances per read.
So in this example, if you have 4 instances, there will always be at least 4 reads no matter what, and the only thing you can do is cover the fetch time with some computation, if that is even possible.
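To make the packed-versus-scalar idea concrete, here is a small illustration written as a CUDA kernel (the same coalescing argument applies to OpenCL's float4; all names below are made up for the example):
// Scalar version: four separate global loads per thread, one from each array.
__global__ void scalarLoads(const float *a, const float *b,
                            const float *c, const float *d, float *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    out[i] = a[i] + b[i] + c[i] + d[i];
}

// Packed version: one 128-bit load brings all four values in together.
__global__ void packedLoad(const float4 *packed, float *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float4 v = packed[i];   // single 16-byte read
    out[i] = v.x + v.y + v.z + v.w;
}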

Memory management in Forth

So I'm just learning Forth and was curious if anyone could help me understand how memory management generally works. At the moment I only have (some) experience with the C stack-vs-heap paradigm.
From what I understand, one can allocate in the Dictionary, or on the heap. Is the Dictionary faster/preferred like the stack in C? But unlike in C, there aren't scopes and automatic stack reclamation, so I'm wondering if one only uses the dictionary for global data structures (if at all).
As far as the heap goes, is it pretty much like C? Is heap management a standard (ANS) concept, or is it implementation-defined?
It is not a choice between the dictionary and the heap: the equivalent of the heap is the dictionary. However, it comes with the severe limitation that it acts more like a stack than a heap: new words are added to the end of the dictionary (allocation by ALLOT, and freeing by FORGET or FREE, but freeing all newer words as well, so it acts more like multiple POPs).
An implementation can control the memory layout and thus implement a traditional heap (or garbage collection). An example is A FORTH implementation of the Heap Data Structure for Memory Management (1984). Another implementation is Dynamic Memory Heaps for Quartus Forth (2000).
A lot is implementation-dependent or extensions. For instance, the memory layout often has the two block buffers and the text input buffer (located by BLOCK and TIB), together with values and low-level/primitive functions of the language, in the lowest portion; the dictionary in the middle (growing upwards); and the return stack and the parameter stack at the top.1
The address of the first available byte above the dictionary is returned by HERE (it changes as the dictionary expands).
There is also a scratchpad area above the dictionary (address returned by PAD) for temporarily storing data. The scratchpad area can be regarded as free memory.
The preferred mode of operation is to use the stack as much as possible instead of local variables or a heap.
1 p. 286 (about a particular edition of Forth, MMSFORTH) in chapter "FORTH's Memory, Dictionary, and Vocabularies", Forth: A text and a reference. Mahlon G. Kelly and Nicholas Spies. ISBN 0-13-326349-5 / 0-13-326331-2 (pbk.). 1986 by Prentice-Hall.
The fundamental question may not have been answered in a way that a new Forth user would require, so I will take a run at it.
Memory in Forth can be very target-dependent, so I will limit the description to the simplest model, that being a flat memory space where code and data live together happily (as opposed to segmented memory models, FLASH memory for code and RAM for data, or other more complicated models).
The Dictionary typically starts at the bottom of memory and is allocated upwards by the Forth system. The two stacks, in a simple system, would exist in high memory and typically have two CPU registers pointing to them. (Very system-dependent.)
At the most fundamental level, memory is allocated simply by changing the value of the dictionary pointer variable (sometimes called DP).
The programmer does not typically access this variable directly but rather uses some higher-level words to control it.
As mentioned, the Forth word HERE returns the next available address in the dictionary space. What was not mentioned was that HERE is defined by fetching the value of the variable DP. (A system dependency here, but useful for a description.)
In Forth HERE might look like this:
: HERE ( -- addr) DP @ ;
That's it.
To allocate some memory we need to move HERE upwards and we do that with the word ALLOT.
The Forth definition for ALLOT simply takes a number from the parameter stack and adds it to the value in DP. So it is nothing more than:
: ALLOT ( n --) DP +! ; \ '+!' adds n to the contents of the variable DP
ALLOT is used by the FORTH system when we create a new definition so that what we created is safely inside 'ALLOTed' memory.
Something that is not immediately obvious is that ALLOT can take a negative number, so it is possible to move the dictionary pointer up or down. So you could allocate some memory like this:
HEX 100 ALLOT
And free it up like this:
HEX -100 ALLOT
All this to say that this is the simplest form of memory management in a Forth system. An example of how this is used can be seen in the definition of the word BUFFER:
: BUFFER: ( n --) CREATE ALLOT ;
BUFFER: "creates" a new name in the dictionary (create uses allot to make space for the name by the way) then ALLOTs n bytes of memory right after the name and any associated housekeeping bytes your Forth system might use
So now to allocate a block of named memory we just type:
MARKER FOO \ mark where the memory ends right now
HEX 2000 BUFFER: IN_BUFFER
Now we have an 8K byte buffer called IN_BUFFER. If we wanted to reclaim that space, in Standard Forth we could type FOO and everything allocated in the dictionary after FOO would be removed from the Forth system.
But if you want temporary memory space, EVERYTHING above HERE is free to use!
So you can simply point to an address and use it if you want to, like this:
: MYMEMORY here 200 + ; \ MYMEMORY points to un-allocated memory above HERE
\ MYMEMORY moves with HERE. Be aware.
MYMEMORY HEX 1000 ERASE \ fill it with 4K bytes of zero
Forth has typically been used for high-performance embedded applications where dynamic memory allocation can cause unreliable code, so static allocation using ALLOT was preferred. However, bigger systems have a heap and use ALLOCATE, FREE and RESIZE much like we use malloc etc. in C.
BF
Peter Mortensen laid it out very well. I'll add a few notes that might help a C programmer some.
The stack is closest to what C terms "auto" variables, and what are commonly called local variables. You can give your stack values names in some forths, but most programmers try to write their code so that naming the values is unnecessary.
The dictionary can best be viewed as "static data" from a C programming perspective. You can reserve ranges of addresses in the dictionary, but in general you will use ALLOT and related words to create static data structures and pools which do not change size after allocation. If you want to implement a linked list that can grow in real time, you might ALLOT enough space for the link cells you will need, and write words to maintain a free list of cells you can draw from. There are naturally implementations of this sort of thing available, and writing your own is a good way to hone pointer management skills.
Heap allocation is available in many modern Forths, and the standard defines ALLOCATE, FREE and RESIZE words that work in a way analogous to malloc(), free(), and realloc() in C. Where the bytes are allocated from will vary from system to system. Check your documentation. It's generally a good idea to store the address in a variable or some other more permanent structure than the stack so that you don't inadvertently lose the pointer before you can free it.
As a side note, these words (along with the file i/o words) return a status on the stack that is non-zero if an error occurred. This convention fits nicely with the exception handling mechanism, and allows you to write code like:
variable PTR
1024 allocate throw PTR !
\ do some stuff with PTR
PTR @ free throw
0 PTR !
Or for a more complex if somewhat artificial example of allocate/free:
\ A simple 2-cell linked list implementation using allocate and free
: >link ( a -- a ) ;
: >data ( a -- a ) cell + ;
: newcons ( a -- a ) \ make a cons cell that links to the input
2 cells allocate throw tuck >link ! ;
: linkcons ( a -- a ) \ make a cons cell that gets linked by the input
0 newcons dup rot >link ! ;
: makelist ( n -- a ) \ returns the head of a list of the numbers from 0..n
0 newcons dup >r
over 0 ?do
i over >data ! linkcons ( a -- a )
loop >data ! r> ;
: walklist ( a -- )
begin dup >data ? >link @ dup 0= until drop ;
: freelist ( a -- )
begin dup >link @ swap free throw dup 0= until drop ;
: unittest 10 makelist dup walklist freelist ;
Some Forth implementations support local variables on the return stack frame and allocating memory blocks. For example in SP-Forth:
lib/ext/locals.f
lib/ext/uppercase.f
100 CONSTANT /buf
: test ( c-addr u -- ) { \ len [ /buf 1 CHARS + ] buf }
buf SWAP /buf UMIN DUP TO len CMOVE
buf len UPPERCASE
0 buf len + C! \ just for illustration
buf len TYPE
;
S" abc" test \ --> "ABC"
With Forth you enter a different world.
In a typical Forth like ciforth on Linux (and assuming 64 bits) you can configure your Forth to have a linear memory space that is as large as your swap space (e.g. 128 Gbyte). That is yours to fill in with arrays, linked lists, pictures, whatever. You do this interactively, typically by declaring variables and including files. There are no restrictions. Forth only provides you with a HERE pointer to help you keep track of the memory you have used up. Even that you can ignore, and there is even a word in the 1994 standard that provides scratch space floating in the free memory (PAD).
Is there something like malloc() and free()? Not necessarily. In a small kernel of a couple of dozen kilobytes, no. But you can just include a file with ALLOCATE / FREE and set aside a couple of Gbyte to use for dynamic memory.
As an example, I'm currently working with TIFF files. A typical 140 Mbyte picture takes a small chunk out of the dictionary, advancing HERE.
Rows of pixels are transformed, decompressed, etc. For that I use dynamic memory, so I ALLOCATE space for the decompression result of a row. I have to manually FREE it again when the result has been used up for another transformation. It feels totally different from C. There is more control and more danger.
Your question about scopes: in Forth, if you know the address, you can access the data structure, even if you jotted F7FFA1003 down on a piece of paper. Trying to make programs safer by separate name spaces is not prominent in Forth style. So-called wordlists (see also VOCABULARY) provide facilities in that direction.
There's a little elephant hiding in a big FORTH memory management room, and I haven't seen too many people mention it.
The canonical FORTH has, at the very least, a non-addressable parameter stack. This is the case in all FORTH hardware implementations I'm aware of (usually originating with Chuck Moore) that have a hardware parameter stack: it's not mapped into the addressable memory space.
What does "non-addressable" mean? It means: you can't have pointers to the parameter stack, i.e. there are no means to get addresses of things on that stack. The stack is a "black box" that you can only access via the stack API (opcodes if it's a hardware stack), without bypassing it - and only that API will modify its contents.
This implies no aliasing between parameter stack and memory accesses using pointers - via @ and ! and the like. This enables efficient code generation with small effort, and indeed it makes decent generated code in FORTH systems orders of magnitude easier to obtain than with C and C++.
This of course breaks down when pointers can be obtained to the parameter stack. A well designed system would probably have guarded API for such access, since within the guards the code generator has to spill everything from registers to stack - in absence of full data flow analysis, that is.
DFA and other "expensive" optimization techniques are not of course impossible in FORTH, it's just that they are a bit larger in scope than many a practical FORTH system. They can be done very cleanly in spite of that (I'm using CFA, DFA and SSA optimizations in an in-house FORTH implementation, and the whole thing has less source code, comments included, than the utility classes in LLVM... - classes that are used all over the place, but that don't actually do anything related to compiling or code analysis).
A practical FORTH system can also place aliasing limitations on the return stack contents, namely that the return addresses themselves don't alias. That way control flow can be analyzed optimistically, only taking into account explicit stack accesses via R@, >R and R>, while letting you place addressable local variables on that stack - that's typically done when a variable is larger than a cell or two, or would be awkward to keep around on the parameter stack.
In C and C++, aliasing between automatic "local" variables and pointers is a big problem, because only large compilers with big optimizers can afford to prove lack of aliasing and forgo register reloads/spills when intervening pointer dereferences take place. Small compilers, to remain compliant and not generate broken code, have to pessimize and assume that accesses via char* alias everything, and accesses via Type* alias that type and others "like it" (e.g. derived types in C++). That char* aliases all things in C is a prime example of where you pay a big price for a feature you didn't usually intend to use.
Usually, forcing an unsigned char type for characters, and re-writing the string API using this type, lets you not use char* all over the place and lets the compiler generate much better code. Compilers of course add lots of analysis passes to minimize the fallout from this design fiasco... And all it'd take to fix in C is having a byte type that aliases every other type, and is compatible with arbitrary pointers, and has the size of the smallest addressable unit of memory. The reuse of void in void* to mean "pointer to anything" was, in hindsight, a mistake, since returning void means returning nothing, whereas pointing to void absolutely does not mean "pointing to nothing".
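To make the aliasing point concrete, here is a minimal C-style sketch (the function names are made up): because a char* is allowed to alias any object, a small compiler must assume the store through p can modify *len and reload it on every iteration, unless restrict (or equivalent analysis) tells it otherwise.
void zero_bytes(char *p, const int *len)
{
    /* the compiler must assume p[i] = 0 may overwrite *len,
       so *len is re-read from memory on every pass */
    for (int i = 0; i < *len; i++)
        p[i] = 0;
}

void zero_bytes_restrict(char * restrict p, const int * restrict len)
{
    /* the restrict promise rules out aliasing, so *len can be
       hoisted into a register and the loop body tightened */
    for (int i = 0; i < *len; i++)
        p[i] = 0;
}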
My idea is published at https://sites.google.com/a/wisc.edu/memorymanagement
I'm hoping to put Forth code on GitHub soon.
If you have an array (or several) with each array having a certain number of items of a certain size, you can pair a single-purpose stack to each array. The stack is initialized with the address of each array item. To allocate an array item, pop an address off the stack. To deallocate an array item, push its address onto the stack.
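A minimal C-style sketch of that scheme, with a fixed-size item type and hypothetical names:
#define NITEMS 16

typedef struct { int payload[4]; } Item;

static Item  pool[NITEMS];        /* the array of fixed-size items          */
static Item *freestack[NITEMS];   /* the single-purpose stack paired to it  */
static int   sp;                  /* number of addresses currently on stack */

void pool_init(void)              /* initialize: push every item's address  */
{
    for (sp = 0; sp < NITEMS; sp++)
        freestack[sp] = &pool[sp];
}

Item *pool_alloc(void)            /* allocate: pop an address off the stack */
{
    return sp ? freestack[--sp] : 0;
}

void pool_free(Item *it)          /* deallocate: push its address back      */
{
    freestack[sp++] = it;
}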

Are array initialization operations cached as well?

If you are not reading a value but assigning a value, for example:
int array[] = new int[5];
for (int i = 0; i < array.length; i++) {
    array[i] = 2;
}
does the array still come into the cache? Can't the CPU bring the array elements one by one into its registers, do the assignment, and after that write the updated value to main memory, bypassing the cache because it's not necessary in this case?
The answer depends on the cache protocol; I answered assuming Write-Back, Write-Allocate.
The array will still come into the cache, and it will make a difference. When a cache block is retrieved from memory, it is more than just a single memory location (the actual size depends on the design of the cache). Since arrays are stored in order in memory, pulling in array[0] will pull in the rest of the block, which will include (at least some of) array[1], array[2], array[3] and array[4]. This means the following accesses will not have to go to main memory.
Also, after all this is done, the values will NOT be written to memory immediately (under write-back); instead, the CPU keeps using the cache for reads and writes until that cache block is evicted, at which point the values are written to main memory.
Overall this is preferable to going to memory every time because the cache is much faster and the chances are the user is going to use the memory he just set relatively soon.
If the protocol is Write-Through, No-Allocate, then it won't bring the block into the cache and the store will go straight through to main memory.

Allocating more memory to an existing Global memory array

Is it possible to add memory to a previously allocated array in global memory?
What I need to do is this:
//cudamalloc memory for d_A
int n=0;int N=100;
do
{
Kernel<<< , >>> (d_A,n++);
//add N memory to d_A
} while(n!=5);
Does doing another cudaMalloc remove the values of the previously allocated array? In my case the values of the previously allocated array should be kept...
First, cudaMalloc behaves like malloc, not realloc. This means that cudaMalloc will allocate totally new device memory at a new location. There is no realloc function in the CUDA API.
Secondly, as a workaround, you can just use cudaMalloc again to allocate more memory. Remember to free the device pointer with cudaFree before you assign a new address to d_A. The following code is functionally what you want.
int n = 0; int N = 100;
//set the initial memory size
size_t size = <something>;
do
{
    //allocate just enough memory
    cudaMalloc((void**) &d_A, size);
    Kernel<<< ... >>> (d_A, n++);
    //free memory allocated for d_A
    cudaFree(d_A);
    //increase the memory size
    size += N;
} while (n != 5);
Thirdly, cudaMalloc can be an expensive operation, and I expect the above code will be rather slow. I think you should consider why you want to grow the array. Can you allocate memory for d_A one time with enough memory for the largest use case? There is likely no reason to allocate only 100 bytes if you know you need 1,000 bytes later on!
//calculate the max memory requirement
MAX_SIZE = <something>;
//allocate only once
cudaMalloc((void**) &d_A, MAX_SIZE);
//use for loops when they are appropriate
for (n = 0; n < 5; n++)
{
    Kernel<<< ... >>> (d_A, n);
}
Your pseudocode does not "add memory to a previously allocated array" at all. The standard C way of increasing the size of an existing allocation is via the realloc() function, and there is no CUDA equivalent of realloc() at the time of writing.
When you do
cudaMalloc(d_A....)
// something
cudaMalloc(d_A....)
all you are doing is creating a new memory allocation and assigning it to d_A. The previous memory allocation still exists, but now you have lost the pointer value of the previous memory and have no way of accessing it. Based on this and your previous question on almost the same subject, might I suggest you spend a bit of time revising memory and pointer concepts in C before you try CUDA, because unless you have a very clear understanding of these fundamentals, you will find the distributed memory nature of CUDA very confusing.
I'm not sure what complications CUDA adds to the mix, but in C you can't add memory to an already allocated array.
If you want to grow a malloc'd array, you need to malloc a new array of the size you need and copy the contents from the existing array.
If you're doing this often then it's probably worth mallocing more than you need each time to avoid costly (in terms of processing time) re-allocation operations.
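A hedged sketch of that idea applied to the CUDA case in the question: grow by allocating a larger buffer, copying the old contents device-to-device, and freeing the old one, over-allocating (here by doubling) so the copy does not happen on every step. The helper name and the growth policy are illustrative, not part of any CUDA API.
// Hypothetical grow helper: allocates a larger buffer, copies the old
// contents device-to-device, frees the old buffer, and returns the new one.
float *growDeviceArray(float *d_old, size_t usedCount, size_t *capacity)
{
    size_t newCapacity = *capacity * 2;      // geometric growth amortizes the copies
    float *d_new = NULL;
    cudaMalloc((void**)&d_new, newCapacity * sizeof(float));
    // copy the existing values so they are kept, as the question requires
    cudaMemcpy(d_new, d_old, usedCount * sizeof(float), cudaMemcpyDeviceToDevice);
    cudaFree(d_old);                         // release the old, smaller allocation
    *capacity = newCapacity;
    return d_new;
}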
