Using XE6
I am trying to figure out how to determine the physical size of the StrokeThickness in FireMonkey. For example, what if I need a StrokeThickness of 2 mm? How do I set the physical thickness?
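For what it's worth, the conversion itself is simple arithmetic: pixels = millimetres × DPI / 25.4. How you obtain the screen's DPI in FireMonkey depends on the platform, so the sketch below (written in C++ purely to show the formula, with the DPI treated as a given) is only illustrative:

    // Convert a physical stroke thickness in millimetres to pixels.
    // 'dpi' is assumed to come from whatever the target platform reports
    // for the screen; 25.4 mm per inch is a fixed conversion factor.
    float MmToPixels(float mm, float dpi)
    {
        return mm * dpi / 25.4f;
    }

    // Example: a 2 mm stroke on a 96 DPI screen is roughly 7.6 pixels.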
So, Vulkan introduced subpasses, and OpenGL implements similar behaviour with ARM_framebuffer_fetch.
In the past, I have used framebuffer_fetch successfully for tonemapping post-effect shaders.
Back then the limitation was that one could only read the contents of the framebuffer at the location of the currently rendered fragment.
Now, what I wonder is whether there is by now any way in Vulkan (or even OpenGL ES) to read from multiple locations (for example to implement a blur kernel) without the tiled hardware having to store/load to RAM.
In theory I guess it should be possible; the first pass would just need to render a slightly larger area than the blur subpass, based on the kernel size (so, for example, if the kernel size were 4 pixels, then the resolved tile would need to be 4 pixels smaller than the in-tile buffer sizes), and some pixels would have to be rendered redundantly (on the overlaps of tiles).
Now, is there a way to do that?
I seem to recall having seen some Vulkan instruction related to subpasses that would allow defining the support size (which sounded like what I'm looking for now), but I can't recall where I saw that.
So my questions:
With Vulkan on a mobile tiled renderer architecture, is it possible to forward-render some geometry and then render a full-screen blur over it, all within a single in-tile pass (without the hardware having to store the result of the intermediate pass to RAM first and then load the texture from RAM when blurring)? If so, how?
If the answer to 1 is yes, can it also be done in OpenGL ES?
Short answer, no. Vulkan subpasses still have the 1:1 fragment-to-pixel association requirement.
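For context, subpasses expose earlier results only through input attachments, and an input attachment can only be read at the fragment's own pixel (subpassLoad in GLSL takes no coordinate). Below is a minimal C++ sketch, not from the answer itself, of how such a two-subpass render pass is declared; the VkSubpassDependency (with VK_DEPENDENCY_BY_REGION_BIT) and the VkRenderPassCreateInfo are omitted:

    #include <array>
    #include <vulkan/vulkan.h>

    // Sketch: two subpasses where subpass 1 reads the colour output of
    // subpass 0 as an input attachment. The shader can only read the value
    // at its own pixel, which is the 1:1 restriction mentioned above and
    // the reason a blur kernel cannot be implemented this way.
    std::array<VkSubpassDescription, 2> DescribeSubpasses()
    {
        static const VkAttachmentReference colorRef{0, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL};
        static const VkAttachmentReference inputRef{0, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL};
        static const VkAttachmentReference finalRef{1, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL};

        std::array<VkSubpassDescription, 2> subpasses{};

        // Subpass 0: render the geometry into attachment 0.
        subpasses[0].pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
        subpasses[0].colorAttachmentCount = 1;
        subpasses[0].pColorAttachments    = &colorRef;

        // Subpass 1: fullscreen pass; attachment 0 becomes an input attachment.
        subpasses[1].pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
        subpasses[1].inputAttachmentCount = 1;
        subpasses[1].pInputAttachments    = &inputRef;
        subpasses[1].colorAttachmentCount = 1;
        subpasses[1].pColorAttachments    = &finalRef;

        return subpasses;
    }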
I know that I can set up a collage display with the Intel® Graphics Control Panel and the Intel® Graphics Command Center.
However, can I set up a collage display programmatically? Any solution is fine, whether it is the Windows API, an Intel API, PowerShell, the command line, or any documentation.
Waiting for help, thanks!
I think you can do that with the ChangeDisplaySettingsEx WinAPI. Set the DM_POSITION bit in dmFields, and set dmPosition to the desired value.
To find the monitor device names and current rectangles, use EnumDisplayMonitors and GetMonitorInfo.
Couple more notes.
The primary display has top left position [ 0, 0 ] and you can’t change that. However, coordinates are signed integers, so you can set some other display position to negative X.
Beware of DPI scaling. The units you’ll be getting in the rectangles, and setting in the offsets, depend on DPI awareness manifest of your program.
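Not from the original answer, but here is a rough C++ sketch of how those calls could fit together, assuming the device name (e.g. from MONITORINFOEX::szDevice) and the target position are already known; MoveDisplay is just a made-up helper name:

    #include <windows.h>

    // Sketch: move the display named 'deviceName' so that its top-left corner
    // sits at (x, y) in virtual-screen coordinates.
    bool MoveDisplay(const wchar_t* deviceName, LONG x, LONG y)
    {
        DEVMODEW dm{};
        dm.dmSize = sizeof(dm);
        if (!EnumDisplaySettingsW(deviceName, ENUM_CURRENT_SETTINGS, &dm))
            return false;

        dm.dmFields     = DM_POSITION;   // only the position is being changed
        dm.dmPosition.x = x;
        dm.dmPosition.y = y;

        // Stage the change in the registry without applying it yet...
        LONG res = ChangeDisplaySettingsExW(deviceName, &dm, nullptr,
                                            CDS_UPDATEREGISTRY | CDS_NORESET, nullptr);
        if (res != DISP_CHANGE_SUCCESSFUL)
            return false;

        // ...then apply all staged changes at once.
        return ChangeDisplaySettingsExW(nullptr, nullptr, nullptr, 0, nullptr)
               == DISP_CHANGE_SUCCESSFUL;
    }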
I'm building an image viewer based on Metal and currently experimenting with loading some rather large images (70,000 x 24,000 pixels). The image is being loaded into multiple MTLTextures, each of type MTLTextureType2DArray.
Metal appears to be creating the textures and allocating the required memory as Xcode reports my application using over 9 GB of memory. On my 13" MBP with 16GB of memory, I clearly don't have 9 GB of GPU memory for all those textures so I assume Metal is using system memory for the allocations?
However, when it comes time to perform a render pass, Metal immediately throws an Insufficient Memory error when I ask for the current render pass descriptor.
How do I manage texture allocations so that I don't exceed any Metal or system limits? Obviously at some image size I'll need to switch from loading the entire image into memory to a more tile-based approach but that's going to vary from one machine to another depending on GPU and RAM.
I would have thought that allocating new textures would just start failing at some point, but I appear to be able to allocate enough to cover an image of width 70,000 pixels and height 24,000 pixels (BGRA8), but that same allocation causes a memory error when the render pass occurs.
Edit #1:
I just realized that when using MTLStorageModeManaged for the textures, Metal only needs to update the video-memory representation of a texture when I assign it to a fragment index during my render pass. If I don't assign all of the textures, then my Insufficient Memory error goes away because, presumably, there's enough video memory left over for Metal to allocate a drawable.
That suggests I can have as much texture data as I do system memory, but I need to be careful how much of that texture data I access during my render pass to not exhaust available video memory.
Xcode Console Error:
Execution of the command buffer was aborted due to an error during execution. Insufficient Memory (IOAF code 8)
Use the following guidelines to determine the appropriate storage mode for a particular texture.
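That sentence appears to come from Apple's documentation on choosing a resource storage mode. Purely as an illustration of the managed-storage behaviour described in Edit #1, here is a minimal sketch using the metal-cpp C++ wrapper (the Swift/Objective-C calls are analogous); the tile dimensions and the MakeTileTexture helper are made up:

    #include <Metal/Metal.hpp>   // metal-cpp

    // Sketch: create one 2D-array tile texture with managed storage, so a
    // CPU-side copy lives in system memory and Metal keeps the GPU copy in
    // sync as needed.
    MTL::Texture* MakeTileTexture(MTL::Device* device,
                                  NS::UInteger tileWidth,
                                  NS::UInteger tileHeight,
                                  NS::UInteger sliceCount)
    {
        MTL::TextureDescriptor* desc = MTL::TextureDescriptor::alloc()->init();
        desc->setTextureType(MTL::TextureType2DArray);
        desc->setPixelFormat(MTL::PixelFormatBGRA8Unorm);
        desc->setWidth(tileWidth);
        desc->setHeight(tileHeight);
        desc->setArrayLength(sliceCount);
        desc->setStorageMode(MTL::StorageModeManaged);   // CPU + GPU copies
        desc->setUsage(MTL::TextureUsageShaderRead);

        MTL::Texture* texture = device->newTexture(desc);
        desc->release();
        return texture;
    }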
Just a curiosity, since my earlier question was put on hold and I couldn't communicate any further on it. My curiosity, according to this link, is whether the physical size of a pixel changes across different devices, such as computer screens, iPads, and mobile phones. If it does, can a pixel be considered a relative unit, relative to the device? My other curiosity comes from a YouTube video which says that the size of a pixel changes logically when we change the resolution of the screen; however, even after changing resolutions, I could not see the size of the image change. Hence, I would like to know whether the size of a pixel stays the same on every device, or whether it changes only logically according to the resolution of the screen and the resolution of the image.
Consider screens with a native maximum (physical) resolution of 1600x900. On a 21" monitor with that resolution, the physical pixel size will be different than on a 42" monitor with the same native resolution. Logical resolution is a different matter, but the (logical) pixel grouping will likewise differ when the underlying physical display sizes differ.
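To put rough numbers on that (assuming 16:9 panels): a 1600x900 grid has a diagonal of √(1600² + 900²) ≈ 1836 pixels. Over a 21" diagonal that is about 87 pixels per inch, i.e. a pixel pitch of roughly 0.29 mm; over a 42" diagonal it is about 44 pixels per inch, roughly 0.58 mm per pixel. The logical resolution is identical, but the physical pixels are twice as large.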
I'm using OpenGL to speed up GUI rendering. It works fine, but when the user drags the window onto a different display (potentially connected to a different GPU), the window interior starts rendering black. When the user moves the window back to the original display, it starts working again.
I have this report from Windows XP, I'm unfortunately unable to check on Win 7/8 and Mac OS X right now.
Any ideas what to do about it?
In the current Windows and Linux driver models, OpenGL contexts are tied to a certain graphics scanout framebuffer. It's perfectly possible for the scanout framebuffer to span several connected displays and even GPUs, if the GPU architecture allows for that (for example NVidia SLI and AMD CrossFire).
However, what does not work (with current driver architectures) is GPUs of different vendors (e.g. NVidia and Intel) sharing a scanout buffer, or, in the all-NVidia or all-AMD case, GPUs that have not been connected using SLI or CrossFire.
So if the monitors are connected to different graphics cards then this can happen.
This is, however, only a software design limitation. It's perfectly possible to separate graphics rendering from display scanout. This is in fact the technical basis for hybrid graphics, where a fast GPU renders into the scanout buffer memory managed by a different GPU (NVidia Optimus, for example).
The low-hanging fruit to fix that would be to recreate the context when the window passes over to a screen connected to a different GPU. But this has a problem: if the window is split among screens, it will stay black on one of them. Also, recreating a context, together with re-uploading all the data, can be a lengthy operation. And often, in situations like yours, the device on the other screen is incompatible with the feature set of the original context.
A workaround for this is to do all rendering on an off-screen framebuffer object (FBO), whose contents you then copy to CPU memory and from there to the target window using GDI operations. This method, however, has the huge drawback of involving a full memory round trip and increased latency.
The steps to set this up would be (a rough sketch of the per-frame copy follows the list):
Identify the screen with the GPU you want to use
Create a hidden window centered on that screen (i.e. do not pass WS_VISIBLE as a style to CreateWindow and do not call ShowWindow on it).
Create an OpenGL context on this window; the pixel format doesn't have to be double buffered, but double buffering usually gives better performance.
Create the target, user-visible window; do not bind an OpenGL context to this window.
Set up a Framebuffer Object (FBO) on the OpenGL context
the renderbuffer target of this FBO is to be created to match the client rect size of the target window; when the window gets resized, resize the FBO renderbuffer.
set up 2 renderbuffer objects for double-buffered operation
Set up a Pixel Buffer Object (PBO) that matches the dimensions of the renderbuffers
when the renderbuffers' size changes, the PBO needs to be resized as well
With OpenGL render to the FBO, then transfer the pixel contents to the PBO (glBindBuffer, glReadPixels)
Map the PBO to process memory using glMapBuffer and use the SetDIBitsToDevice function to transfer the data from the mapped memory region to the target window device context; then unmap the PBO
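Not part of the original answer, but here is a rough C++ sketch of the per-frame copy path described in the last three steps (context, window, FBO, PBO and renderbuffer creation omitted; the identifiers fbo, pbo, width and height are assumed to come from that setup):

    #include <windows.h>
    #include <GL/glew.h>   // or any other loader providing the FBO/PBO entry points

    // Sketch of the per-frame path: render into the FBO, read the pixels into
    // the PBO, map the PBO and blit the pixels into the visible window via GDI.
    void PresentFrame(HWND targetWindow, GLuint fbo, GLuint pbo, int width, int height)
    {
        // 1. Render the scene into the off-screen FBO.
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glViewport(0, 0, width, height);
        // ... draw calls go here ...

        // 2. Transfer the FBO contents into the PBO.
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, nullptr);

        // 3. Map the PBO into process memory and hand the pixels to GDI.
        void* pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
        if (pixels)
        {
            BITMAPINFO bmi{};
            bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
            bmi.bmiHeader.biWidth       = width;
            bmi.bmiHeader.biHeight      = height;   // bottom-up DIB, matches glReadPixels
            bmi.bmiHeader.biPlanes      = 1;
            bmi.bmiHeader.biBitCount    = 32;
            bmi.bmiHeader.biCompression = BI_RGB;

            HDC dc = GetDC(targetWindow);
            SetDIBitsToDevice(dc, 0, 0, width, height, 0, 0, 0, height,
                              pixels, &bmi, DIB_RGB_COLORS);
            ReleaseDC(targetWindow, dc);

            glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }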