A D3D11_USAGE_STAGING Resource cannot be bound to any parts of the graphics pipeline - directx-11

I'm trying to create an RWStructuredBuffer with D3D11_USAGE_STAGING for CPU access, and I get this error:
error: A D3D11_USAGE_STAGING Resource cannot be bound to any parts of the graphics pipeline.
How can I get the contents of the RWStructuredBuffer from the GPU to the CPU? I need the result. Thanks for any help.
Should I copy a resource created with the D3D11_USAGE_DEFAULT flag to a resource created with D3D11_USAGE_STAGING, then read it from the CPU?

I found the answer, and it is just the second step I guessed at above: copy the resource to another resource that was created with the D3D11_USAGE_STAGING flag, then Map it with D3D11_MAP_READ and read the mapped data.
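For reference, here is a minimal sketch of that copy-and-map sequence in native D3D11 C++; device, context, gpuBuffer and result are placeholder names for objects assumed to exist, and error handling is omitted:

    // Create a STAGING twin of the GPU buffer (staging takes no bind flags).
    D3D11_BUFFER_DESC desc = {};
    gpuBuffer->GetDesc(&desc);
    desc.Usage          = D3D11_USAGE_STAGING;
    desc.BindFlags      = 0;
    desc.MiscFlags      = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;

    ID3D11Buffer* staging = nullptr;
    device->CreateBuffer(&desc, nullptr, &staging);

    // GPU-side copy, then map the staging copy for CPU reading.
    context->CopyResource(staging, gpuBuffer);

    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped)))
    {
        memcpy(result, mapped.pData, desc.ByteWidth);  // result: your CPU array
        context->Unmap(staging, 0);
    }
    staging->Release();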

Related

Equivalent to ZwQueryVirtualMemory that works on system memory?

ZwQueryVirtualMemory reports on virtual memory in the address space of a process. I would like to do the same thing, but for paged memory in system space. Is there an equivalent function that deals with system space instead of process space?
You could use ZwQuerySystemInformation with SystemModuleInformation to get all loaded drivers, then find the entry you want and read the driver's base address and size from it. If you want to do it properly, get the base of the targeted driver either with that same method or via PsLoadedModuleList, and then walk the module's sections manually using its PE headers.
Also a tip: if you are going to copy the memory in order to dump it, use MmCopyMemory.
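A rough kernel-mode sketch of that enumeration. Note that SystemModuleInformation and its structures are undocumented, so the definitions below are the commonly used reverse-engineered ones (you may also have to declare the ZwQuerySystemInformation prototype yourself), and a real driver would retry if the module list grew between the two calls:

    #define SystemModuleInformation 11  /* undocumented information class */

    typedef struct _RTL_PROCESS_MODULE_INFORMATION {
        HANDLE Section;
        PVOID  MappedBase;
        PVOID  ImageBase;        /* driver base */
        ULONG  ImageSize;        /* driver size */
        ULONG  Flags;
        USHORT LoadOrderIndex;
        USHORT InitOrderIndex;
        USHORT LoadCount;
        USHORT OffsetToFileName; /* offset of the bare name in FullPathName */
        UCHAR  FullPathName[256];
    } RTL_PROCESS_MODULE_INFORMATION, *PRTL_PROCESS_MODULE_INFORMATION;

    typedef struct _RTL_PROCESS_MODULES {
        ULONG NumberOfModules;
        RTL_PROCESS_MODULE_INFORMATION Modules[1];
    } RTL_PROCESS_MODULES, *PRTL_PROCESS_MODULES;

    ULONG len = 0;
    ZwQuerySystemInformation(SystemModuleInformation, NULL, 0, &len);

    PRTL_PROCESS_MODULES mods =
        ExAllocatePoolWithTag(NonPagedPool, len, 'sdoM');
    if (mods && NT_SUCCESS(ZwQuerySystemInformation(SystemModuleInformation,
                                                    mods, len, &len))) {
        for (ULONG i = 0; i < mods->NumberOfModules; i++) {
            /* Modules[i].ImageBase / ImageSize give each driver's range */
        }
    }
    if (mods)
        ExFreePoolWithTag(mods, 'sdoM');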

When Resources of a PE file are loaded

When using a resource included in a PE file (for example a binary resource) in C++, we have to call
1) FindResource and then
2) LoadResource
to access the resource.
Taking the name "LoadResource" literally, I wonder: does the Windows loader load all of an application's resources into memory when it loads the other parts (like the code and data sections), or are they loaded lazily, only when we need them?
If so, can we unload these resources after we have used them in order to free the allocated memory?
These functions are old, they date back to Windows versions that did not yet support virtual memory. Back in the olden days they would actually physically load a resource into RAM.
Those days are long gone. The OS loader creates a memory-mapped file to map the executable into memory, and anything from the file (code and resources) is only mapped into RAM when the program dereferences a pointer. You only pay for what you use.
So LoadResource() does very little: it simply returns a pointer, disguised as an HGLOBAL handle. LockResource() does nothing interesting; it simply casts the HGLOBAL back to a pointer. When you actually start using the pointer, you trip a page fault and the kernel reads the file, loading it into RAM. UnlockResource() and FreeResource() do nothing. If the OS needs RAM for another process, it can unmap the RAM for the resource; nothing needs to be preserved, since the memory is backed by the file, so the page can simply be discarded and paged back in when necessary if you use the resource again.
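To make that concrete, the usual access sequence looks like this; IDR_MYDATA is a placeholder identifier from your own .rc file, and error handling is omitted:

    #include <windows.h>
    #define IDR_MYDATA 101  // placeholder: your resource ID from the .rc file

    HMODULE hMod = GetModuleHandle(NULL);  // this executable's mapped image
    HRSRC   hRes = FindResource(hMod, MAKEINTRESOURCE(IDR_MYDATA), RT_RCDATA);
    if (hRes) {
        HGLOBAL     hData = LoadResource(hMod, hRes);  // locates the mapping
        DWORD       size  = SizeofResource(hMod, hRes);
        const void* data  = LockResource(hData);       // handle back to pointer
        // data points into the memory-mapped image; the first read of it
        // is what actually pages the bytes in. No unload call is needed.
    }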

Unknown symbol flush_cache_range in linux device driver

I am just writing my very first Linux device driver and have run into a problem. I want to prevent one memory region from being cached, so I have been trying to use flush_cache_range() and flush_tlb_range() to flush the cache for that region. Everything compiles fine, but when I try to load the kernel module I get the following errors:
Unknown symbol flush_cache_range (err 0)
Unknown symbol flush_tlb_range (err 0)
I find this very strange. Shouldn't they be defined in the kernel?
I know that alternatively I could use dma_alloc_coherent() to allocate a non-cached memory region, but I don't have a device structure. Passing NULL for that parameter didn't cause any errors, but I also couldn't see any of the data that was supposed to be there.
Some information about my system: I'm trying to get this running on an ARM microcontroller with an integrated FPGA (the Xilinx Zynq). The FPGA copies some data to a memory location specified by the CPU, and I want to access this memory without getting stale data from the caches.
Any help is much appreciated.
You cannot use functions such as flush_cache_range() because they are not intended to be used by modules.
To allocate memory that can be accessed by a DMA device, you must use dma_alloc_coherent().
This requires a valid device structure so that it can do proper mapping between memory addresses and bus addresses.
If your device is not on a bus that is handled by an existing framework (such as PCI), you have to create a platform device.
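A stripped-down sketch of that route in a platform driver's probe function; names like fpga_probe are placeholders, the platform_driver registration boilerplate is omitted, and the 64 KB size is arbitrary:

    #include <linux/platform_device.h>
    #include <linux/dma-mapping.h>
    #include <linux/sizes.h>

    static int fpga_probe(struct platform_device *pdev)
    {
        dma_addr_t dma_handle;  /* bus address to program into the FPGA */
        void *vaddr;            /* uncached CPU mapping of the buffer   */

        /* declare which addresses the device can reach */
        if (dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)))
            return -ENODEV;

        /* coherent (uncached) buffer: CPU reads always see device writes */
        vaddr = dma_alloc_coherent(&pdev->dev, SZ_64K, &dma_handle, GFP_KERNEL);
        if (!vaddr)
            return -ENOMEM;

        /* hand dma_handle to the FPGA; read the results through vaddr */
        return 0;
    }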
A few notes:
1- flush_cache_range() doesn't "prevent one memory region from being cached"; it simply flushes (cleans + invalidates) the caches. Any future reads or writes to this memory region through the same virtual range will go through the cache again.
2- If the FPGA writes to memory and the CPU then reads from it, flushing the cache after the fact probably isn't the correct thing to do anyway. Usually what you need is to invalidate the memory region and then tell the FPGA to write (see the streaming-DMA sketch after these notes).
3- Please take a look at "${kernel-src}/Documentation/DMA-API.txt" in the kernel sources. It has plenty of information about how to safely use a specific region of memory for DMA (cache maintenance + phys_to_dma translation).
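For note 2, the streaming DMA API gives you that invalidate-then-read ordering without a coherent allocation; a sketch, assuming dev, buf and len already exist:

    /* map the region for device-to-CPU transfers (invalidates the cache) */
    dma_addr_t handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
    if (dma_mapping_error(dev, handle))
        return -EIO;

    /* ... program handle into the FPGA and let it write ... */

    /* hand the buffer back to the CPU before reading the results */
    dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);
    /* ... CPU reads buf ... */

    dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);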

How can I bind a buffer resource that resides on the GPU to the input assembler (IA)?

I use compute shaders to compute a triangle list and store it in a RWStructuredBuffer. For testing I read this buffer back and pass it to the IA via context.InputAssembler.SetVertexBuffers(…). This approach works, but it is only suitable for checking the data for correctness.
Now I want to bind the (already existing) buffer to the IA stage using a resource view (i.e. without passing a pointer to the vertex buffer).
I am reading some good books (Frank D. Luna, Jason Zink), but they never mention this case.
===============
EDIT:
The syntax I am using here is imposed by the SharpDX wrapper.
I can bind the buffer to the vertex shader via context.VertexShader.SetShaderResource(...), binding a ResourceView. In the VS I use SV_VertexID to access the buffer. So I HAVE a working solution for the moment, but there might be cases in the future where I must bind the buffer to the input assembler.
Simply put, you can't bind a structured buffer to the IA stage, at least not directly; the runtime will not allow it.
If you set ResourceOptionFlags.BufferStructured in OptionFlags, you are not allowed to use VertexBuffer/IndexBuffer/StreamOutput/ConstantBuffer/RenderTarget/Depth as bind flags; resource creation will fail.
One option, which costs you a GPU copy, is to create a second buffer with VertexBuffer bind flags and Default usage (the same size as your structured buffer).
Once you are done processing your structured buffer, call:
DeviceContext.CopyResource
and you'll have a standard vertex buffer ready to use.
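In native D3D11 terms (the SharpDX calls map one-to-one), the two-buffer approach might look like this; device, context, structuredBuf and Vertex are placeholders, and error handling is omitted:

    // Create a plain vertex buffer the same size as the structured buffer.
    D3D11_BUFFER_DESC desc = {};
    structuredBuf->GetDesc(&desc);
    desc.Usage               = D3D11_USAGE_DEFAULT;
    desc.BindFlags           = D3D11_BIND_VERTEX_BUFFER;
    desc.CPUAccessFlags      = 0;
    desc.MiscFlags           = 0;  // no structured flag on this one
    desc.StructureByteStride = 0;

    ID3D11Buffer* vertexBuf = nullptr;
    device->CreateBuffer(&desc, nullptr, &vertexBuf);

    // After the compute shader has filled structuredBuf:
    context->CopyResource(vertexBuf, structuredBuf);

    // Bind it like any other vertex buffer.
    UINT stride = sizeof(Vertex), offset = 0;  // Vertex: your element struct
    context->IASetVertexBuffers(0, 1, &vertexBuf, &stride, &offset);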

How to tell which module has requested memory with request_mem_region?

I'm writing a kernel driver which needs to access memory-mapped IO.
My call to request_mem_region is failing, indicating that another module (either loaded or built-in) has requested the memory in question.
How can I determine which driver has done this?
Seeing as a string identifier is passed to the request_mem_region function, I assume this is possible.
/proc/iomem is a file that shows the current map of the system's IO memory, region by region, annotated with the name string each driver passed to request_mem_region, so reading it tells you which driver owns the range in question.
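A tiny sketch of how the identifier ends up there (the address, size and name below are placeholders):

    #include <linux/ioport.h>

    if (!request_mem_region(0x40000000, 0x1000, "my-driver"))
        pr_err("my-driver: region busy, check /proc/iomem for the owner\n");

    /* /proc/iomem will then contain a line like:
       40000000-40000fff : my-driver */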
