I want to resize a screen capture taken using the Desktop Duplication API in SharpDX. I am using the Screen Capture sample code from the SharpDX Samples repository; the relevant portion follows:
SharpDX.DXGI.Resource screenResource;
OutputDuplicateFrameInformation duplicateFrameInformation;
// Try to get duplicated frame within given time
duplicatedOutput.AcquireNextFrame(10000, out duplicateFrameInformation, out screenResource);
if (i > 0)
{
// copy resource into memory that can be accessed by the CPU
using (var screenTexture2D = screenResource.QueryInterface<Texture2D>())
device.ImmediateContext.CopyResource(screenTexture2D, screenTexture);
// Get the desktop capture texture
var mapSource = device.ImmediateContext.MapSubresource(screenTexture, 0, MapMode.Read, MapFlags.None);
System.Diagnostics.Debug.WriteLine(watch.Elapsed);
// Create Drawing.Bitmap
var bitmap = new System.Drawing.Bitmap(width, height, PixelFormat.Format32bppArgb);
var boundsRect = new System.Drawing.Rectangle(0, 0, width, height);
// Copy pixels from screen capture Texture to GDI bitmap
var mapDest = bitmap.LockBits(boundsRect, ImageLockMode.WriteOnly, bitmap.PixelFormat);
var sourcePtr = mapSource.DataPointer;
var destPtr = mapDest.Scan0;
for (int y = 0; y < height; y++)
{
// Iterate and write to bitmap...
I would like to resize the image to something much smaller than the actual screen size before processing it as a byte array. I do not need to save the image, just get at the bytes. I would like to do this relatively quickly and efficiently (e.g. leveraging the GPU if possible).
I'm not able to scale during CopyResource, as the output dimensions are required to be the same as the input dimensions. Can I perform another copy from my screenTexture2D to scale? How exactly do I scale the resource: do I use a swap chain, a matrix transform, or something else?
If you are fine resizing to a power of two from the screen, you can do it by:
Create a "smaller" texture (the smallerTexture below) with RenderTarget/ShaderResource bind flags and the GenerateMipMaps option, the same size as the screen, with a mip count > 1 (2 to get size /2, 3 to get /4, etc.).
Copy the first mipmap of the screen texture to the smaller texture
DeviceContext.GenerateMipMaps on the smaller texture
Copy the selected mipmap of the smaller texture (1 for /2, 2 for /4, etc.) to the staging texture (which should also be declared smaller, i.e. the same size as the mip level that is going to be used)
A quick hack on the original code to generate a /2 texture would be like this:
[STAThread]
private static void Main()
{
// # of graphics card adapter
const int numAdapter = 0;
// # of output device (i.e. monitor)
const int numOutput = 0;
const string outputFileName = "ScreenCapture.bmp";
// Create DXGI Factory1
var factory = new Factory1();
var adapter = factory.GetAdapter1(numAdapter);
// Create device from Adapter
var device = new Device(adapter);
// Get DXGI.Output
var output = adapter.GetOutput(numOutput);
var output1 = output.QueryInterface<Output1>();
// Width/Height of desktop to capture
int width = output.Description.DesktopBounds.Width;
int height = output.Description.DesktopBounds.Height;
// Create Staging texture CPU-accessible
var textureDesc = new Texture2DDescription
{
CpuAccessFlags = CpuAccessFlags.Read,
BindFlags = BindFlags.None,
Format = Format.B8G8R8A8_UNorm,
Width = width/2,
Height = height/2,
OptionFlags = ResourceOptionFlags.None,
MipLevels = 1,
ArraySize = 1,
SampleDescription = { Count = 1, Quality = 0 },
Usage = ResourceUsage.Staging
};
var stagingTexture = new Texture2D(device, textureDesc);
// Create the GPU texture that will receive the screen and generate the mipmaps
var smallerTextureDesc = new Texture2DDescription
{
CpuAccessFlags = CpuAccessFlags.None,
BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
Format = Format.B8G8R8A8_UNorm,
Width = width,
Height = height,
OptionFlags = ResourceOptionFlags.GenerateMipMaps,
MipLevels = 4,
ArraySize = 1,
SampleDescription = { Count = 1, Quality = 0 },
Usage = ResourceUsage.Default
};
var smallerTexture = new Texture2D(device, smallerTextureDesc);
var smallerTextureView = new ShaderResourceView(device, smallerTexture);
// Duplicate the output
var duplicatedOutput = output1.DuplicateOutput(device);
bool captureDone = false;
for (int i = 0; !captureDone; i++)
{
try
{
SharpDX.DXGI.Resource screenResource;
OutputDuplicateFrameInformation duplicateFrameInformation;
// Try to get duplicated frame within given time
duplicatedOutput.AcquireNextFrame(10000, out duplicateFrameInformation, out screenResource);
if (i > 0)
{
// copy the captured frame into the mipmapped GPU texture
using (var screenTexture2D = screenResource.QueryInterface<Texture2D>())
device.ImmediateContext.CopySubresourceRegion(screenTexture2D, 0, null, smallerTexture, 0);
// Generate the mipmaps of the screen texture
device.ImmediateContext.GenerateMips(smallerTextureView);
// Copy the mipmap 1 of smallerTexture (size/2) to the staging texture
device.ImmediateContext.CopySubresourceRegion(smallerTexture, 1, null, stagingTexture, 0);
// Get the desktop capture texture
var mapSource = device.ImmediateContext.MapSubresource(stagingTexture, 0, MapMode.Read, MapFlags.None);
// Create Drawing.Bitmap
var bitmap = new System.Drawing.Bitmap(width/2, height/2, PixelFormat.Format32bppArgb);
var boundsRect = new System.Drawing.Rectangle(0, 0, width/2, height/2);
// Copy pixels from screen capture Texture to GDI bitmap
var mapDest = bitmap.LockBits(boundsRect, ImageLockMode.WriteOnly, bitmap.PixelFormat);
var sourcePtr = mapSource.DataPointer;
var destPtr = mapDest.Scan0;
for (int y = 0; y < height/2; y++)
{
// Copy a single line
Utilities.CopyMemory(destPtr, sourcePtr, width/2 * 4);
// Advance pointers
sourcePtr = IntPtr.Add(sourcePtr, mapSource.RowPitch);
destPtr = IntPtr.Add(destPtr, mapDest.Stride);
}
// Release source and dest locks
bitmap.UnlockBits(mapDest);
device.ImmediateContext.UnmapSubresource(stagingTexture, 0);
// Save the output
bitmap.Save(outputFileName);
// Capture done
captureDone = true;
}
screenResource.Dispose();
duplicatedOutput.ReleaseFrame();
}
catch (SharpDXException e)
{
if (e.ResultCode.Code != SharpDX.DXGI.ResultCode.WaitTimeout.Result.Code)
{
throw e;
}
}
}
// Display the texture using system associated viewer
System.Diagnostics.Process.Start(Path.GetFullPath(Path.Combine(Environment.CurrentDirectory, outputFileName)));
// TODO: We should clean up all allocated COM objects here
}
You need to take your original source surface in GPU memory and Draw() it on to a smaller surface. This involves simple vertex/pixel shaders, which some folks with simple needs would rather bypass.
I would look to see if someone has made a sprite library for SharpDX; it should be a common "thing". Alternatively, use Direct2D (which is much more fun). Since D2D is just a user-mode library over D3D, it interops with D3D very easily.
I've never used SharpDX, but from memory you would do something like this (a rough sketch follows the list below):
1.) Create an ID2D1Device, wrapping your existing DXGI device (make sure your D3D11 device was created with the D3D11_CREATE_DEVICE_BGRA_SUPPORT flag)
2.) Get the ID2D1DeviceContext from your ID2D1Device
3.) Wrap your source and destination DXGI surfaces into D2D bitmaps with ID2D1DeviceContext::CreateBitmapFromDxgiSurface
4.) Call ID2D1DeviceContext::SetTarget with the D2D bitmap wrapping your destination surface
5.) Call BeginDraw, then ID2D1DeviceContext::DrawBitmap passing your source D2D bitmap, then EndDraw
6.) Save your destination
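As a rough illustration of steps 1-5 (untested; dxgiDevice, srcSurface, dstSurface, dstWidth and dstHeight are placeholder names for your own objects), the C++ might look like this:
// Requires <d2d1_1.h>, <wrl/client.h> (Microsoft::WRL::ComPtr) and linking against d2d1.lib
// 1) Create a D2D factory/device wrapping the existing DXGI device
ComPtr<ID2D1Factory1> d2dFactory;
D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, d2dFactory.GetAddressOf());
ComPtr<ID2D1Device> d2dDevice;
d2dFactory->CreateDevice(dxgiDevice.Get(), &d2dDevice);
// 2) Get a device context from the D2D device
ComPtr<ID2D1DeviceContext> d2dContext;
d2dDevice->CreateeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, &d2dContext);
// 3) Wrap the source and destination DXGI surfaces as D2D bitmaps
D2D1_BITMAP_PROPERTIES1 srcProps = D2D1::BitmapProperties1(
    D2D1_BITMAP_OPTIONS_NONE,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
D2D1_BITMAP_PROPERTIES1 dstProps = D2D1::BitmapProperties1(
    D2D1_BITMAP_OPTIONS_TARGET,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
ComPtr<ID2D1Bitmap1> srcBitmap, dstBitmap;
d2dContext->CreateBitmapFromDxgiSurface(srcSurface.Get(), &srcProps, &srcBitmap);
d2dContext->CreateBitmapFromDxgiSurface(dstSurface.Get(), &dstProps, &dstBitmap);
// 4) + 5) Draw the full source into the smaller destination rectangle
d2dContext->SetTarget(dstBitmap.Get());
d2dContext->BeginDraw();
D2D1_RECT_F dstRect = D2D1::RectF(0.0f, 0.0f, (float)dstWidth, (float)dstHeight);
d2dContext->DrawBitmap(srcBitmap.Get(), &dstRect, 1.0f, D2D1_BITMAP_INTERPOLATION_MODE_LINEAR);
d2dContext->EndDraw();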
Here is a pixelate example...
d2d_device_context_h()->BeginDraw();
d2d_device_context_h()->SetTarget(mp_ppBitmap1.Get());
D2D1_SIZE_F rtSize = mp_ppBitmap1->GetSize();
rtSize.height *= (1.0f / cbpx.iPixelsize.y);
rtSize.width *= (1.0f / cbpx.iPixelsize.x);
D2D1_RECT_F rtRect = { 0.0f, 0.0f, rtSize.width, rtSize.height };
D2D1_SIZE_F rsSize = mp_ppBitmap0->GetSize();
D2D1_RECT_F rsRect = { 0.0f, 0.0f, rsSize.width, rsSize.height };
d2d_device_context_h()->DrawBitmap(mp_ppBitmap0.Get(), &rtRect, 1.0f,
D2D1_BITMAP_INTERPOLATION_MODE_LINEAR, &rsRect);
d2d_device_context_h()->SetTarget(mp_ppBitmap0.Get());
d2d_device_context_h()->DrawBitmap(mp_ppBitmap1.Get(), &rsRect, 1.0f,
D2D1_BITMAP_INTERPOLATION_MODE_NEAREST_NEIGHBOR, &rtRect);
d2d_device_context_h()->EndDraw();
Where iPixelsize.xy is the size of the "pixelated pixel". Note that I just use linear interpolation when shrinking the bitmap and NOT when I re-enlarge it. This generates the pixelation effect.
Related
Is it possible to add an external image or text to an already created jsc3d 3D object? For example, if some canvas image data needs to be stored on the created 3D object, is that possible?
Yes, it's possible.
If you look at the jsc3d implementation of Texture, you will see that a texture already has an underlying canvas.
Let's say you have a canvas called "myTexture" and a Mesh called "myMesh", and, to keep it simple, you only need a texture with a fixed size of 128x128 px. The following will paint your canvas onto your mesh:
var canvas = document.getElementById('myTexture');
var context = canvas.getContext('2d');
var dim = 128;
var imgData = context.getImageData(0,0,dim,dim);
var data = imgData.data;
var size = data.length / 4;
var texture = new JSC3D.Texture;
texture.data = new Array(size);
var alpha;
for(var i=0, j=0; i<size; i++, j+=4) {
alpha = data[j + 3];
texture.data[i] = alpha << 24 | data[j] << 16 | data[j+1] << 8 | data[j+2];
if(alpha < 255)
texture.hasTransparency = true;
}
texture.width = dim;
texture.height = dim;
myMesh.setTexture(texture);
viewer.update();
The .data loop is taken from JSC3D.Texture.prototype.createFromImage (credits humu2009, creator of jsc3d).
I've been going through these tutorials (only 2 links allowed for me): https://code.msdn.microsoft.com/Direct3D-Tutorial-Win32-829979ef
and reading through the Direct3D 11 Graphics Pipeline: https://msdn.microsoft.com/en-us/library/windows/desktop/ff476882%28v=vs.85%29.aspx
I currently have a Pixel (aka. Fragment) Shader coded in HLSL, consisting of the following code:
//Pixel Shader input.
struct psInput
{
float4 Position: SV_POSITION;
float4 Color: COLOR;
};
//Pixel (aka. Fragment) Shader.
float4 PS(psInput input): SV_TARGET
{
return input.Color;
}
What I (think I) would like to do is multisample and access nearby pixel data for each pixel in my Pixel Shader so that I can perform a sort of custom anti-aliasing like FXAA (http://developer.download.nvidia.com/assets/gamedev/files/sdk/11/FXAA_WhitePaper.pdf). From my understanding, I need to pass a texture to HLSL using PSSetShaderResources for each render, but beyond that I have no idea. So, my question is:
How do I send nearby pixel data to a Pixel-Shader in Direct3D 11?
Being able to do this kind of thing would also be extremely beneficial to my understanding of how c++ and HLSL interact with each other beyond the standard "pass some float4's to the shader" that I find in tutorials. It seems that this is the most crucial aspect of D3D development, and yet I can't find very many examples of it online.
I've considered traditional MSAA (MultiSample Anti-Aliasing), but I can't find any information on how to do it successfully in D3D 11 beyond that I need to be using a "BitBlt" (bit-block transfer) model swap chain first. (See DXGI_SAMPLE_DESC1 and DXGI_SAMPLE_DESC; only a count of 1 and a quality of 0 (no AA) will result in things being drawn.) Additionally, I would like to know how to perform the above for general understanding in case I need it for other aspects of my project. Answers on how to perform MSAA in D3D 11 are welcome too though.
Please use D3D 11 and HLSL code only.
To do custom anti-aliasing like FXAA you'll need to render the scene to an offscreen render target:
-Create an ID3D11Texture2D with the bind flags D3D11_BIND_RENDER_TARGET and D3D11_BIND_SHADER_RESOURCE
-Create an ID3D11ShaderResourceView and an ID3D11RenderTargetView for the texture created in step 1.
-Render the scene to the ID3D11RenderTargetView created in step 2
-Set the backbuffer as render target and bind the ID3D11ShaderResourceView created in step 2 to the correct pixel shader slot.
-Render a fullscreen triangle covering the entire screen; in its pixel shader you'll be able to sample the texture containing the scene (use the Load() function). A rough sketch of the first two steps follows this list.
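A minimal sketch of steps 1 and 2 (assuming an existing ID3D11Device* called device and the backbuffer width/height; error handling omitted) could look like this:
// Offscreen scene texture usable both as render target and as shader resource
D3D11_TEXTURE2D_DESC sceneDesc = {};
sceneDesc.Width = width;
sceneDesc.Height = height;
sceneDesc.MipLevels = 1;
sceneDesc.ArraySize = 1;
sceneDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
sceneDesc.SampleDesc.Count = 1;
sceneDesc.Usage = D3D11_USAGE_DEFAULT;
sceneDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
ID3D11Texture2D* sceneTexture = nullptr;
ID3D11RenderTargetView* sceneRTV = nullptr;
ID3D11ShaderResourceView* sceneSRV = nullptr;
device->CreateTexture2D(&sceneDesc, nullptr, &sceneTexture);
device->CreateRenderTargetView(sceneTexture, nullptr, &sceneRTV);
device->CreateShaderResourceView(sceneTexture, nullptr, &sceneSRV);
// Step 3: render the scene with sceneRTV bound via OMSetRenderTargets.
// Step 4: bind the backbuffer again and expose the scene to the pixel shader:
//   context->PSSetShaderResources(0, 1, &sceneSRV);
// Step 5: draw a fullscreen triangle; its pixel shader reads neighbouring pixels
//   from the scene texture with Texture2D::Load().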
When you tried to do traditional MSAA, did you remember to set MultisampleEnable in the rasterizer state?
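For reference, enabling it might look like this (a sketch; device and context stand in for your ID3D11Device* and ID3D11DeviceContext*):
D3D11_RASTERIZER_DESC rasterDesc = {};
rasterDesc.FillMode = D3D11_FILL_SOLID;
rasterDesc.CullMode = D3D11_CULL_BACK;
rasterDesc.DepthClipEnable = TRUE;
rasterDesc.MultisampleEnable = TRUE; // rasterize using the MSAA sample pattern
ID3D11RasterizerState* rasterState = nullptr;
device->CreateRasterizerState(&rasterDesc, &rasterState);
context->RSSetState(rasterState);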
And again I answer my own question, sort of (I never did use FXAA...). I am providing my answer here to be nice to those who are following in my footsteps.
It turns out I was missing the depth stencil view for MSAA. You want SampleCount to be 1U for disabled MSAA, 2U for 2XMSAA, 4U for 4XMSAA, 8U for 8XMSAA, etc. (Use ID3D11Device::CheckMultisampleQualityLevels to "probe" for viable MSAA levels...) You pretty much always want to use a quality level of 0U for disabled MSAA and 1U for enabled MSAA.
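For example, probing for the highest usable sample count could look like this (a sketch only; match the format to your swap chain):
UINT sampleCount = 8U;
UINT qualityLevels = 0U;
while (sampleCount > 1U)
{
    D3DDevice->CheckMultisampleQualityLevels(DXGI_FORMAT_R8G8B8A8_UNORM, sampleCount, &qualityLevels);
    if (qualityLevels > 0U)
    {
        break; // this sample count is supported for the format
    }
    sampleCount /= 2U;
}
// qualityLevels now holds the number of quality levels for sampleCount
// (valid quality values are 0 .. qualityLevels - 1)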
Below is my working MSAA code (you should be able to fill in the rest). Note that I used DXGI_FORMAT_D24_UNORM_S8_UINT and D3D11_DSV_DIMENSION_TEXTURE2DMS, and that the Format values for the depth texture and depth stencil view are the same and the SampleCount and SampleQuality values are the same.
Good luck!
unsigned int SampleCount = 1U;
unsigned int SampleQuality = (SampleCount > 1U ? 1U : 0U);
//Create swap chain.
IDXGIFactory2* dxgiFactory2 = nullptr;
d3dResult = dxgiFactory->QueryInterface(__uuidof(IDXGIFactory2), reinterpret_cast<void**>(&dxgiFactory2));
if (dxgiFactory2)
{
//DirectX 11.1 or later.
d3dResult = D3DDevice->QueryInterface(__uuidof(ID3D11Device1), reinterpret_cast<void**>(&D3DDevice1));
if (SUCCEEDED(d3dResult))
{
D3DDeviceContext->QueryInterface(__uuidof(ID3D11DeviceContext1), reinterpret_cast<void**>(&D3DDeviceContext1));
}
DXGI_SWAP_CHAIN_DESC1 swapChain;
ZeroMemory(&swapChain, sizeof(swapChain));
swapChain.Width = width;
swapChain.Height = height;
swapChain.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
swapChain.SampleDesc.Count = SampleCount;
swapChain.SampleDesc.Quality = SampleQuality;
swapChain.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChain.BufferCount = 2U;
d3dResult = dxgiFactory2->CreateSwapChainForHwnd(D3DDevice, w32Window, &swapChain, nullptr, nullptr, &SwapChain1);
if (SUCCEEDED(d3dResult))
{
d3dResult = SwapChain1->QueryInterface(__uuidof(IDXGISwapChain), reinterpret_cast<void**>(&SwapChain));
}
dxgiFactory2->Release();
}
else
{
//DirectX 11.0.
DXGI_SWAP_CHAIN_DESC swapChain;
ZeroMemory(&swapChain, sizeof(swapChain));
swapChain.BufferCount = 2U;
swapChain.BufferDesc.Width = width;
swapChain.BufferDesc.Height = height;
swapChain.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
swapChain.BufferDesc.RefreshRate.Numerator = 60U;
swapChain.BufferDesc.RefreshRate.Denominator = 1U;
swapChain.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChain.OutputWindow = w32Window;
swapChain.SampleDesc.Count = SampleCount;
swapChain.SampleDesc.Quality = SampleQuality;
swapChain.Windowed = true;
d3dResult = dxgiFactory->CreateSwapChain(D3DDevice, &swapChain, &SwapChain);
}
//Disable Alt + Enter and Print Screen shortcuts.
dxgiFactory->MakeWindowAssociation(w32Window, DXGI_MWA_NO_PRINT_SCREEN | DXGI_MWA_NO_ALT_ENTER);
dxgiFactory->Release();
if (FAILED(d3dResult))
{
return false;
}
//Create render target view.
ID3D11Texture2D* backBuffer = nullptr;
d3dResult = SwapChain->GetBuffer(0U, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&backBuffer));
if (FAILED(d3dResult))
{
return false;
}
d3dResult = D3DDevice->CreateRenderTargetView(backBuffer, nullptr, &RenderTargetView);
backBuffer->Release();
if (FAILED(d3dResult))
{
return false;
}
//Create depth stencil texture.
ID3D11Texture2D* DepthStencilTexture = nullptr;
D3D11_TEXTURE2D_DESC depthTextureLayout;
ZeroMemory(&depthTextureLayout, sizeof(depthTextureLayout));
depthTextureLayout.Width = width;
depthTextureLayout.Height = height;
depthTextureLayout.MipLevels = 1U;
depthTextureLayout.ArraySize = 1U;
depthTextureLayout.Usage = D3D11_USAGE_DEFAULT;
depthTextureLayout.CPUAccessFlags = 0U;
depthTextureLayout.MiscFlags = 0U;
depthTextureLayout.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthTextureLayout.SampleDesc.Count = SampleCount;
depthTextureLayout.SampleDesc.Quality = SampleQuality;
depthTextureLayout.BindFlags = D3D11_BIND_DEPTH_STENCIL;
d3dResult = D3DDevice->CreateTexture2D(&depthTextureLayout, nullptr, &DepthStencilTexture);
if (FAILED(d3dResult))
{
return false;
}
//Create depth stencil.
D3D11_DEPTH_STENCIL_DESC depthStencilLayout;
depthStencilLayout.DepthEnable = true;
depthStencilLayout.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthStencilLayout.DepthFunc = D3D11_COMPARISON_LESS;
depthStencilLayout.StencilEnable = true;
depthStencilLayout.StencilReadMask = 0xFF;
depthStencilLayout.StencilWriteMask = 0xFF;
depthStencilLayout.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthStencilLayout.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_INCR;
depthStencilLayout.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthStencilLayout.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthStencilLayout.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthStencilLayout.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_INCR;
depthStencilLayout.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthStencilLayout.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
ID3D11DepthStencilState* depthStencilState;
D3DDevice->CreateDepthStencilState(&depthStencilLayout, &depthStencilState);
D3DDeviceContext->OMSetDepthStencilState(depthStencilState, 1U);
//Create depth stencil view.
D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewLayout;
ZeroMemory(&depthStencilViewLayout, sizeof(depthStencilViewLayout));
depthStencilViewLayout.Format = depthTextureLayout.Format;
depthStencilViewLayout.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2DMS;
depthStencilViewLayout.Texture2D.MipSlice = 0U;
d3dResult = D3DDevice->CreateDepthStencilView(DepthStencilTexture, &depthStencilViewLayout, &DepthStencilView);
DepthStencilTexture->Release();
if (FAILED(d3dResult))
{
return false;
}
//Set output-merger render targets.
D3DDeviceContext->OMSetRenderTargets(1U, &RenderTargetView, DepthStencilView);
I currently have the problem that a library creates a DX11 texture with the BGRA pixel format.
But the displaying library can only display RGBA correctly. (This means the colors are swapped in the rendered image.)
After looking around I found a simple for-loop that solves the problem, but the performance is not very good and scales badly with higher resolutions. I'm new to DirectX and maybe I just missed a simple function to do the conversion.
// Get the image data
unsigned char* pDest = view->image->getPixels();
// Prepare source texture
ID3D11Texture2D* pTexture = static_cast<ID3D11Texture2D*>( tex );
// Get context
ID3D11DeviceContext* pContext = NULL;
dxDevice11->GetImmediateContext(&pContext);
// Copy data, fast operation
pContext->CopySubresourceRegion(texStaging, 0, 0, 0, 0, tex, 0, nullptr);
// Create mapping
D3D11_MAPPED_SUBRESOURCE mapped;
HRESULT hr = pContext->Map( texStaging, 0, D3D11_MAP_READ, 0, &mapped );
if ( FAILED( hr ) )
{
return;
}
// Calculate size
const size_t size = _width * _height * 4;
// Access pixel data
unsigned char* pSrc = static_cast<unsigned char*>( mapped.pData );
// Offsets
int offsetSrc = 0;
int offsetDst = 0;
int rowOffset = mapped.RowPitch % _width;
// Loop through it, BGRA to RGBA conversion
for (int row = 0; row < _height; ++row)
{
for (int col = 0; col < _width; ++col)
{
pDest[offsetDst] = pSrc[offsetSrc+2];
pDest[offsetDst+1] = pSrc[offsetSrc+1];
pDest[offsetDst+2] = pSrc[offsetSrc];
pDest[offsetDst+3] = pSrc[offsetSrc+3];
offsetSrc += 4;
offsetDst += 4;
}
// Adjust source offset to account for row pitch padding
offsetSrc += rowOffset;
}
// Unmap texture
pContext->Unmap( texStaging, 0 );
Solution:
Texture2D txDiffuse : register(t0);
SamplerState texSampler : register(s0);
struct VSScreenQuadOutput
{
float4 Position : SV_POSITION;
float2 TexCoords0 : TEXCOORD0;
};
float4 PSMain(VSScreenQuadOutput input) : SV_Target
{
return txDiffuse.Sample(texSampler, input.TexCoords0).rgba;
}
Obviously, iterating over a texture on the CPU is not the most efficient way. If you know that the colors in a texture are always swapped like that and you don't want to modify the texture itself in your C++ code, the most straightforward way would be to do it in the pixel shader. When you sample the texture, simply swap the colors there. You won't even notice any performance drop.
I am new to DirectX and trying to use SharpDX to capture a screen shot using the Desktop Duplication API.
I am wondering if there is an easy way to create a bitmap that I can use on the CPU (i.e. save to a file, etc.).
I am using the following code to get the desktop screen shot:
var factory = new SharpDX.DXGI.Factory1();
var adapter = factory.Adapters1[0];
var output = adapter.Outputs[0];
var device = new SharpDX.Direct3D11.Device(SharpDX.Direct3D.DriverType.Hardware,
DeviceCreationFlags.BgraSupport |
DeviceCreationFlags.Debug);
var dev1 = device.QueryInterface<SharpDX.DXGI.Device1>();
var output1 = output.QueryInterface<Output1>();
var duplication = output1.DuplicateOutput(dev1);
OutputDuplicateFrameInformation frameInfo;
SharpDX.DXGI.Resource desktopResource;
duplication.AcquireNextFrame(50, out frameInfo, out desktopResource);
var desktopSurface = desktopResource.QueryInterface<Surface>();
Can anyone please give me some idea of how I can create a bitmap object from the desktopSurface (a DXGI.Surface instance)?
I've just completed this myself, although I am not going to say much about this code!
public byte[] GetScreenData()
{
// We want to copy the texture from the back buffer so
// we don't hog it.
Texture2DDescription desc = BackBuffer.Description;
desc.CpuAccessFlags = CpuAccessFlags.Read;
desc.Usage = ResourceUsage.Staging;
desc.OptionFlags = ResourceOptionFlags.None;
desc.BindFlags = BindFlags.None;
byte[] data = null;
using (var texture = new Texture2D(DeviceDirect3D, desc))
{
DeviceContextDirect3D.CopyResource(BackBuffer, texture);
using (Surface surface = texture.QueryInterface<Surface>())
{
DataStream dataStream;
var map = surface.Map(SharpDX.DXGI.MapFlags.Read, out dataStream);
int lines = (int)(dataStream.Length / map.Pitch);
data = new byte[surface.Description.Width * surface.Description.Height * 4];
int dataCounter = 0;
// width of the surface - 4 bytes per pixel.
int actualWidth = surface.Description.Width * 4;
for (int y = 0; y < lines; y++)
{
for (int x = 0; x < map.Pitch; x++)
{
if (x < actualWidth)
{
data[dataCounter++] = dataStream.Read<byte>();
}
else
{
dataStream.Read<byte>();
}
}
}
dataStream.Dispose();
surface.Unmap();
}
}
return data;
}
This will get you a byte[] which can then be used to generate a bitmap.
The following is how I saved it to a PNG image.
using (var stream = await file.OpenAsync( Windows.Storage.FileAccessMode.ReadWrite ))
{
BitmapEncoder encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.PngEncoderId, stream);
double dpi = DisplayProperties.LogicalDpi;
encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Straight,
(uint)width, (uint)height, dpi, dpi, pixelData);
encoder.BitmapTransform.ScaledWidth = (uint)newWidth;
encoder.BitmapTransform.ScaledHeight = (uint)newHeight;
await encoder.FlushAsync();
waiter.Set();
}
I know this was answered a while ago, and maybe you figured it out by now :3 but if someone else gets stuck I hope this helps!
The MSDN page for the Desktop Duplication API tells us the format of the image:
DXGI provides a surface that contains a current desktop image through the new IDXGIOutputDuplication::AcquireNextFrame method. The format of the desktop image is always DXGI_FORMAT_B8G8R8A8_UNORM no matter what the current display mode is.
You can use the Surface.Map(MapFlags, out DataStream) method to get access to the data on the CPU.
The code should look like* this:
DataStream dataStream;
desktopSurface.Map(MapFlags.Read, out dataStream);
for(int y = 0; y < desktopSurface.Description.Height; y++) {
for(int x = 0; x < desktopSurface.Description.Width; x++) {
// read DXGI_FORMAT_B8G8R8A8_UNORM pixel:
byte b = dataStream.Read<byte>();
byte g = dataStream.Read<byte>();
byte r = dataStream.Read<byte>();
byte a = dataStream.Read<byte>();
// color (r, g, b, a) and pixel position (x, y) are available
// TODO: write to bitmap or process otherwise
}
}
desktopSurface.Unmap();
*Disclaimer: I don't have a Windows 8 installation at hand, I'm only following the documentation. I hope this works :)
The code below helps me convert OpenGL output to a JPEG image using libjpeg, but the resulting image is flipped vertically...
The code works perfectly, but the final image is flipped and I don't know why!
unsigned char *pdata = new unsigned char[width*height*3];
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pdata);
FILE *outfile;
if ((outfile = fopen("sample.jpeg", "wb")) == NULL) {
printf("can't open %s");
exit(1);
}
struct jpeg_compress_struct cinfo;
struct jpeg_error_mgr jerr;
cinfo.err = jpeg_std_error(&jerr);
jpeg_create_compress(&cinfo);
jpeg_stdio_dest(&cinfo, outfile);
cinfo.image_width = width;
cinfo.image_height = height;
cinfo.input_components = 3;
cinfo.in_color_space = JCS_RGB;
jpeg_set_defaults(&cinfo);
/*set the quality [0..100] */
jpeg_set_quality (&cinfo, 100, true);
jpeg_start_compress(&cinfo, true);
JSAMPROW row_pointer;
int row_stride = width * 3;
while (cinfo.next_scanline < cinfo.image_height) {
row_pointer = (JSAMPROW) &pdata[cinfo.next_scanline*row_stride];
jpeg_write_scanlines(&cinfo, &row_pointer, 1);
}
jpeg_finish_compress(&cinfo);
fclose(outfile);
jpeg_destroy_compress(&cinfo);
OpenGL's coordinate system has the origin in the lower left corner of the image, while libjpeg assumes that the origin is in the upper left corner. Make the following change to fix your code:
while (cinfo.next_scanline < cinfo.image_height)
{
row_pointer = (JSAMPROW) &pdata[(cinfo.image_height-1-cinfo.next_scanline)*row_stride];
jpeg_write_scanlines(&cinfo, &row_pointer, 1);
}
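An alternative (not from the original answer, just a sketch) is to flip the pixel buffer once right after glReadPixels and leave the scanline loop untouched:
// Swap rows top<->bottom in place; requires <vector> and <cstring>
const int row_stride = width * 3; // 3 bytes per RGB pixel
std::vector<unsigned char> tmp(row_stride);
for (int y = 0; y < height / 2; ++y) {
    unsigned char* top    = pdata + y * row_stride;
    unsigned char* bottom = pdata + (height - 1 - y) * row_stride;
    memcpy(tmp.data(), top, row_stride);
    memcpy(top, bottom, row_stride);
    memcpy(bottom, tmp.data(), row_stride);
}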