I'm trying to use the VK_EXT_external_memory_host extension (https://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/VK_EXT_external_memory_host.html). I'm not sure what the difference is between vk::ExternalMemoryHandleTypeFlagBits::eHostAllocationEXT and eHostMappedForeignMemoryEXT, but I've been failing to get either to work. (I'm using VulkanHpp.)
void* data_ptr = getTorchDataPtr();
uint32_t MEMORY_TYPE_INDEX;
auto EXTERNAL_MEMORY_TYPE = vk::ExternalMemoryHandleTypeFlagBits::eHostAllocationEXT;
// or vk::ExternalMemoryHandleTypeFlagBits::eHostMappedForeignMemoryEXT;
vk::MemoryAllocateInfo memoryAllocateInfo(SIZE_BYTES, MEMORY_TYPE_INDEX);
vk::ImportMemoryHostPointerInfoEXT importMemoryHostPointerInfoEXT(
    EXTERNAL_MEMORY_TYPE, // handle type of the imported pointer
    data_ptr);
memoryAllocateInfo.pNext = &importMemoryHostPointerInfoEXT;
vk::raii::DeviceMemory deviceMemory( device, memoryAllocateInfo );
I'm getting Result::eErrorOutOfDeviceMemory from vkAllocateMemory (called by the DeviceMemory constructor) if EXTERNAL_MEMORY_TYPE = eHostAllocationEXT, and zeros in the memory if EXTERNAL_MEMORY_TYPE = eHostMappedForeignMemoryEXT (I've checked that the py/libtorch tensor I'm importing is non-zero, and that my code successfully copies and reads back a different buffer).
All values of MEMORY_TYPE_INDEX produce the same behaviour (except when MEMORY_TYPE_INDEX overflows).
The set bits of the bitmask returned by getMemoryHostPointerPropertiesEXT are supposed to give the valid values for MEMORY_TYPE_INDEX:
auto pointerProperties = device.getMemoryHostPointerPropertiesEXT(
EXTERNAL_MEMORY_TYPE,
data_ptr);
std::cout << "memoryTypeBits " << std::bitset<32>(pointerProperties.memoryTypeBits) << std::endl;
But if EXTERNAL_MEMORY_TYPE = eHostMappedForeignMemoryEXT, then vkGetMemoryHostPointerPropertiesEXT returns Result::eErrorInitializationFailed, and if EXTERNAL_MEMORY_TYPE = eHostAllocationEXT, the 8th and 9th bits are set. This is the same regardless of whether data_ptr is a CUDA pointer (0x7ffecf400000) or a CPU pointer (0x2be7c80), so I suspect something has gone wrong.
I'm also unable to enable the extension VK_KHR_external_memory_capabilities, which is required by VK_KHR_external_memory, which in turn is required by the extension we are using, VK_EXT_external_memory_host. I'm using Vulkan SDK version 1.2.162.0.
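If I understand the spec correctly, VK_KHR_external_memory_capabilities is an instance extension while the other two are device extensions, so they would be requested roughly like this (a sketch of my understanding, not my exact setup; both KHR extensions were promoted to core in Vulkan 1.1):
std::vector<const char*> instanceExtensions = {
    VK_KHR_EXTERNAL_MEMORY_CAPABILITIES_EXTENSION_NAME }; // instance-level
std::vector<const char*> deviceExtensions = {
    VK_KHR_EXTERNAL_MEMORY_EXTENSION_NAME,        // device-level
    VK_EXT_EXTERNAL_MEMORY_HOST_EXTENSION_NAME }; // device-level
// passed via vk::InstanceCreateInfo and vk::DeviceCreateInfo respectively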
The eErrorOutOfDeviceMemory is strange, as we are not supposed to be allocating any memory; I'd be glad if someone could speculate about this.
I believe that host memory is CPU memory, thus:
vk::ExternalMemoryHandleTypeFlagBits::eHostAllocationEXT won't work because the pointer is to device memory (GPU).
vk::ExternalMemoryHandleTypeFlagBits::eHostMappedForeignMemoryEXT won't work because the memory is not mapped by the host (CPU).
Is there any way to import device-local memory into Vulkan? Does it have to be host-mapped?
Probably not: https://stackoverflow.com/a/54801938/11998382.
I think the best option, for me, is to map some Vulkan memory and copy the PyTorch CPU tensor across. The same data gets uploaded to the GPU twice, but this doesn't really matter, I suppose.
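A minimal sketch of that staging path, assuming hostVisibleIndex has already been chosen from physicalDevice.getMemoryProperties() as a memory type with the HostVisible and HostCoherent property flags, and that getTorchCpuDataPtr() is a hypothetical accessor for the CPU tensor's data:
vk::MemoryAllocateInfo allocInfo(SIZE_BYTES, hostVisibleIndex);
vk::raii::DeviceMemory stagingMemory(device, allocInfo);
void* mapped = stagingMemory.mapMemory(0, SIZE_BYTES);
std::memcpy(mapped, getTorchCpuDataPtr(), SIZE_BYTES); // copy the tensor across
stagingMemory.unmapMemory();
// ...then bind the memory to a staging buffer and record a copy into a
// device-local buffer as usual.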
Related
I am currently trying my first dynamic parallelism code in CUDA. It is pretty simple. In the parent kernel I am doing something like this:
int aPayloads[32];
// Compute aPayloads start values here
int* aGlobalPayloads = nullptr;
cudaMalloc(&aGlobalPayloads, (sizeof(int) *32));
cudaMemcpyAsync(aGlobalPayloads, aPayloads, (sizeof(int)*32), cudaMemcpyDeviceToDevice);
mykernel<<<1, 1>>>(aGlobalPayloads); // Modifies data in aGlobalPayloads
cudaDeviceSynchronize();
// Access results in payload array here
Assuming that I do things right so far, what is the fastest way to access the results in aGlobalPayloads after kernel execution? (I tried cudaMemcpy() to copy aGlobalPayloads back to aPayloads but cudaMemcpy() is not allowed in device code).
You can directly access the data in aGlobalPayloads from your parent kernel code, without any copying:
mykernel<<<1, 1>>>(aGlobalPayloads); // Modifies data in aGlobalPayloads
cudaDeviceSynchronize();
int myval = aGlobalPayloads[0];
I'd encourage careful error checking (read the whole accepted answer here); you do it in device code the same way as in host code. Note also that for device-side cudaMemcpyAsync the programming guide states: "May not pass in local or shared memory pointers". Your usage of aPayloads is a local memory pointer, so that copy is invalid.
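For instance, a minimal sketch of device-side error checking around the nested launch (assuming compilation with relocatable device code, -rdc=true, which dynamic parallelism requires anyway):
mykernel<<<1, 1>>>(aGlobalPayloads);
cudaError_t err = cudaGetLastError(); // same API as on the host
if (err != cudaSuccess) {
    printf("nested launch failed: %s\n", cudaGetErrorString(err));
}
cudaDeviceSynchronize();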
If for some reason you want that data to be explicitly put back in your local array, you can use in-kernel memcpy for that:
memcpy(aPayloads, aGlobalPayloads, sizeof(int)*32);
int myval = aPayloads[0]; // retrieves the same value
(That is also how I would fix the invalid cudaMemcpyAsync from the local aPayloads pointer mentioned above: use in-kernel memcpy.)
I was wondering if there is an existing system call/API for getting the physical address of a virtual address?
If there is none, then some direction on how to get that working?
Also, how does one get the physical address of MMIO, which is non-pageable physical memory?
The answer lies in IOMemoryDescriptor and IODMACommand objects.
If the memory in question is kernel-allocated, it should be allocated by creating an IOBufferMemoryDescriptor in the first place. If that's not possible, or if it's a buffer allocated in user space, you can wrap the relevant pointer using IOMemoryDescriptor::withAddressRange(address, length, options, task) or one of the other factory functions. In the case of withAddressRange, the address passed in must be virtual, in the address space of task.
You can directly grab physical address ranges from an IOMemoryDescriptor by calling the getPhysicalSegment() function (only valid between prepare()…complete() calls). However, normally you would do this for creating scatter-gather lists (DMA), and for this purpose Apple strongly recommends the IODMACommand. You can create these using IODMACommand::withSpecification(). Then use the genIOVMSegments() function to generate the scatter-gather list.
Modern Macs, and also some old PPC G5s contain an IOMMU (Intel calls this VT-d), so the system memory addresses you pass to PCI/Thunderbolt devices are not in fact physical, but IO-Mapped. IODMACommand will do this for you, as long as you use the "system mapper" (the default) and set mappingOptions to kMapped. If you're preparing addresses for the CPU, not a device, you will want to turn off mapping - use kIOMemoryMapperNone in your IOMemoryDescriptor options. Depending on what exactly you're trying to do, you probably don't need IODMACommand in this case either.
Note: it's often wise to pool and reuse your IODMACommand objects, rather than freeing and reallocating them.
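A minimal sketch of the IODMACommand flow described above (untested; assumes a prepared IOMemoryDescriptor* md and a device capable of 64-bit addressing):
IODMACommand* cmd = IODMACommand::withSpecification(
    kIODMACommandOutputHost64,  // emit 64-bit host-format segments
    64,                         // address bits the device supports
    0,                          // no maximum segment size
    IODMACommand::kMapped,      // use the system mapper (IOMMU)
    0,                          // no maximum transfer size
    1);                         // byte alignment
// TODO: check for cmd == nullptr
cmd->setMemoryDescriptor(md, /* autoPrepare: */ true);
UInt64 offset = 0;
while (offset < md->getLength())
{
    IODMACommand::Segment64 segments[32];
    UInt32 numSegments = 32;
    IOReturn result = cmd->genIOVMSegments(&offset, segments, &numSegments);
    // TODO: handle errors; each segments[i] holds fIOVMAddr and fLength
}
cmd->clearMemoryDescriptor();
cmd->release();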
Regarding MMIO, I assume you mean PCI BARs and similar - for IOPCIDevice, you can grab an IOMemoryDescriptor representing the memory-mapped device range using getDeviceMemoryWithRegister() and similar functions.
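For example (a sketch, assuming an IOPCIDevice* named pciDevice and that you want BAR0):
IODeviceMemory* bar0 = pciDevice->getDeviceMemoryWithRegister(
    kIOPCIConfigBaseAddress0);  // config-space register for BAR0
// TODO: check for bar0 == nullptr; bar0 describes the BAR's physical range
IOMemoryMap* bar0Map = bar0->map();  // map into the kernel task for CPU access
volatile void* regs = (volatile void*)bar0Map->getVirtualAddress();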
Example:
If all you want are pure CPU-space physical addresses for a given virtual memory range in some task, you can do something like this (untested as a complete kext that uses it would be rather large):
// INPUTS:
mach_vm_address_t virtual_range_start = …; // start address of virtual memory
mach_vm_size_t virtual_range_size_bytes = …; // number of bytes in range
task_t task = …; // Task object of process in which the virtual memory address is mapped
IOOptionBits direction = kIODirectionInOut; // whether the memory will be written or read, or both during the operation
IOOptionBits options =
kIOMemoryMapperNone // we want raw physical addresses, not IO-mapped
| direction;
// Process for getting physical addresses:
IOMemoryDescriptor* md = IOMemoryDescriptor::withAddressRange(
    virtual_range_start, virtual_range_size_bytes, options, task);
// TODO: check for md == nullptr
// Wire down virtual range to specific physical pages
IOReturn result = md->prepare(direction);
// TODO: do error handling
IOByteCount offset = 0;
while (offset < virtual_range_size_bytes)
{
IOByteCount segment_len = 0;
addr64_t phys_addr = md->getPhysicalSegment(offset, &segment_len, kIOMemoryMapperNone);
// TODO: do something with physical range of segment_len bytes at address phys_addr here
offset += segment_len;
}
/* Unwire. Call this only once you're done with the physical ranges
* as the pager can change the physical-virtual mapping outside of
* prepare…complete blocks. */
md->complete(direction);
md->release();
As explained above, this is not suitable for generating DMA scatter-gather lists for device I/O. Note also this code is only valid for 64-bit kernels. You'll need to be careful if you still need to support ancient 32-bit kernels (OS X 10.7 and earlier) because virtual and physical addresses can still be 64-bit (64-bit user processes and PAE, respectively), but not all memory descriptor functions are set up for that. There are 64-bit-safe variants available to be used for 32-bit kexts.
Memory in the Linux kernel is usually unswappable (Do Kernel pages get swapped out?). However, sometimes it is useful to allow memory to be swapped out. Is it possible to explicitly allocate swappable memory inside the Linux kernel? One method I thought of was to create a user space process and use its memory. Is there anything better?
You can create a file in the internal shm shared memory filesystem.
const char *name = "example";
loff_t size = PAGE_SIZE;
unsigned long flags = 0;
struct file *filp = shmem_file_setup(name, size, flags);
/* assert(!IS_ERR(filp)); */
The file isn't actually linked, so the name isn't visible. The flags may include VM_NORESERVE to skip accounting up-front, instead accounting as pages are allocated. Now you have a shmem file. You can map a page like so:
struct address_space *mapping = filp->f_mapping;
pgoff_t index = 0;
struct page *p = shmem_read_mapping_page(mapping, index);
/* assert(!IS_ERR(p)); */
void *data = page_to_virt(p);
memset(data, 0, PAGE_SIZE);
There is also shmem_read_mapping_page_gfp(..., gfp_t) to specify how the page is allocated. Don't forget to put the page back when you're done with it.
put_page(p);
Ditto with the file.
fput(filp);
The answer to your question is a simple no, or a yes that requires complex modifications to the kernel source.
First, to enable swapping out, ask yourself what happens when kswapd swaps memory out. Essentially, it walks through all the processes and decides whether each piece of their memory can be swapped out or not. All of this memory has the hardware protection mode of ring 3, so SMAP essentially forbids it from being read as data or executed as a program in the kernel (ring 0):
https://en.wikipedia.org/wiki/Supervisor_Mode_Access_Prevention
Check CONFIG_X86_SMAP in your distro's kernel config; on my Ubuntu it has defaulted to "y" for the past few years.
But if you keep your memory at a kernel address (ring 0), then you would have to modify the kswapd operation to trigger swap-out of kernel addresses. Which kernel addresses should it walk first? And what if an address is part of kswapd's own kernel operation? The complexities involved are huge.
Next, consider the swap-in operation: when a memory read is attempted and the page's "not present" bit is set, a hardware exception triggers the Linux kernel memory-fault handler (__do_page_fault()).
And looking into __do_page_fault:
https://elixir.bootlin.com/linux/latest/source/arch/x86/mm/fault.c#L1477
and, after that, how it handles kernel addresses (do_kern_addr_fault()):
https://elixir.bootlin.com/linux/latest/source/arch/x86/mm/fault.c#L1174
which essentially just reports an error for every possible scenario. If you want to enable page faulting on kernel addresses, this path has to be modified.
Note too that the SMAP check (inside smap_violation()) is done in the user-address page-fault path (do_user_addr_fault()).
I have to use a Microchip PIC for a new project (needed high pin count on a TQFP60 package with 5V operation).
I have a huge problem; I might be missing something (sorry for that in advance).
IDE: MPLAB X 3.51
Compiler: XC8 1.41
The issue is that if I initialize an object to anything other than 0, it is not initialized and is always zero when I reach main().
In the simulator it works, and the object has the proper value.
Simple example:
#include <xc.h>
static int x= 0x78;
void main(void) {
while(x){
x++;
}
return;
}
In the simulator, x is 0x78 and while(x) is true.
BUT when I load the code onto the PIC18F67K40 using a PICkit 3, x is 0.
This happens even with a simple sprintf, which does nothing because the format string (char array) is full of zeros:
sprintf(buf,"Number is %u",x")
I cannot initialize any object to anything but zero.
What is going on? Any help appreciated!
Found the problem: the chip has an errata issue, and I got a part from an affected silicon revision. Strange that Farnell sells it; stranger still that the compiler is not prepared for it and doesn't even give a warning to be careful!
Errata note:
Module: PIC18 Core
3.1 TBLRD requires NVMREG value to point to appropriate memory
The affected silicon revisions of the PIC18FXXK40 devices improperly require the NVMREG<1:0> bits in the NVMCON register to be set for TBLRD access of the various memory regions. The issue is most apparent in compiled C programs when the user defines a const type and the compiler uses TBLRD instructions to retrieve the data from program Flash memory (PFM). The issue is also apparent when the user defines an array in RAM for which the compiler creates start-up code, executed before main(), that uses TBLRD instructions to initialize RAM from PFM.
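For const reads in your own code, the manual workaround implied by the errata text is to point NVMREG<1:0> at program Flash before the affected access. A sketch, with the NVMCON1/NVMREG names assumed from the K40 data sheet (this can't fix the start-up code that runs before main(), so an updated compiler is still the real fix):
NVMCON1bits.NVMREG = 2; // assumed: 0b10 selects program Flash memory (PFM) for TBLRD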
I have a very basic question which I have failed to understand after going through the documentation. I am facing this issue while executing one of my projects: the output I get is totally corrupted, and I believe the problem is either with memory allocation or with thread synchronization.
OK, the question is:
Does every thread create a separate copy of all the variables and pointers passed to the kernel function? Or does it create a copy of only the variables, while the memory the pointers point to is shared among all threads?
e.g.
int main()
{
const int DC4_SIZE = 3;
const int DC4_BYTES = DC4_SIZE * sizeof(float);
float * dDC4_in;
float * dDC4_out;
float hDC4_out[DC4_SIZE];
float hDC4_in[DC4_SIZE];
gpuErrchk(cudaMalloc((void**) &dDC4_in, DC4_BYTES));
gpuErrchk(cudaMalloc((void**) &dDC4_out, DC4_BYTES));
// dc4 initialization function on host which allocates some values to DC4[] array
gpuErrchk(cudaMemcpy(dDC4_in, hDC4_in, DC4_BYTES, cudaMemcpyHostToDevice));
mykernel<<<10,128>>>(VolDepth,dDC4_in);
cudaMemcpy(hDC4_out, dDC4_out, DC4_BYTES, cudaMemcpyDeviceToHost);
}
__global__ void mykernel(float VolDepth,float * dDC4_in,float * dDC4_out)
{
for(int index =0 to end)
dDC4_out[index]=dDC4_in[index] * VolDepth;
}
So I am passing the dDC4_in and dDC4_out pointers to the GPU, with dDC4_in initialized with some values, computing dDC4_out, and copying it back to the host.
Will all my 1280 threads have separate dDC4_in/out copies, or will they all work on the same copy on the GPU, overwriting each other's values?
Global memory is shared by all threads in a grid. The parameters you pass to your kernel (that you've allocated with cudaMalloc) are in the global memory space.
Threads do have their own memory (local memory), but in your example dDC4_in and dDC4_out are shared by all of your threads.
As a general run-down (taken from the CUDA Best Practices documentation):
Local memory (and registers) is per-thread, shared memory is per-block, and global, constant, and texture memory are per-grid.
In addition, global/constant/texture memory can be read and modified on the host, while local and shared memory are only around for the duration of your kernel. That is, if you have some important information in your local or shared memory and your kernel finishes, that memory is reclaimed and your information lost. Also, this means that the only way to get data into your kernel from the host is via global/constant/texture memory.
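A minimal sketch of how those spaces appear in kernel code (assuming a launch with 128 threads per block):
__global__ void spaces_example(const float* g_in, float* g_out) // g_*: global memory
{
    __shared__ float tile[128];                    // shared memory: per-block
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    float r = g_in[i];                             // r lives in a register: per-thread
    tile[threadIdx.x] = r;
    __syncthreads();                               // wait for the whole block
    g_out[i] = tile[blockDim.x - 1 - threadIdx.x]; // read a neighbour's value
}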
Anyways, in your case it's a bit hard to recommend how to fix your code, because you don't take threads into account at all. Not only that: in the code you posted, you're only passing 2 arguments to your kernel (which takes 3 parameters), so it's no surprise your results are somewhat lacking. Even if your code were valid, you would have every thread looping from 0 to end and writing to the same locations in memory (the writes would be serialized, but you wouldn't know which write would be the last one to go through). In addition to that race condition, you have every thread doing the same computation: each of your 1280 threads will execute that for loop and perform the same steps. You have to decide on a mapping of threads to data elements, divide up the work in your kernel based on that mapping, and perform your computation accordingly.
E.g., if you have a 1 thread : 1 element mapping:
__global__ void mykernel(float VolDepth,float * dDC4_in,float * dDC4_out)
{
int index = threadIdx.x + blockIdx.x*blockDim.x;
dDC4_out[index]=dDC4_in[index] * VolDepth;
}
Of course, this would also necessitate changing your kernel launch configuration to have the correct number of threads, and if the thread and element counts aren't exact multiples, you'll want some added bounds checking in your kernel, as sketched below.
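For example, a minimal sketch of that bounds check, assuming the element count is passed in as an extra (hypothetical) numElements parameter:
__global__ void mykernel(float VolDepth, float* dDC4_in, float* dDC4_out, int numElements)
{
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    if (index < numElements) // extra threads in the last block do nothing
        dDC4_out[index] = dDC4_in[index] * VolDepth;
}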