How to create a bitmap from a Surface (SharpDX) - directx-11

I am new to DirectX and am trying to use SharpDX to capture a screenshot with the Desktop Duplication API.
I am wondering if there is an easy way to create a bitmap that I can use on the CPU (i.e. save to a file, etc.).
I am using the following code to get the desktop screenshot:
var factory = new SharpDX.DXGI.Factory1();
var adapter = factory.Adapters1[0];
var output = adapter.Outputs[0];
var device = new SharpDX.Direct3D11.Device(SharpDX.Direct3D.DriverType.Hardware,
                                           DeviceCreationFlags.BgraSupport |
                                           DeviceCreationFlags.Debug);
var dev1 = device.QueryInterface<SharpDX.DXGI.Device1>();
var output1 = output.QueryInterface<Output1>();
var duplication = output1.DuplicateOutput(dev1);
OutputDuplicateFrameInformation frameInfo;
SharpDX.DXGI.Resource desktopResource;
duplication.AcquireNextFrame(50, out frameInfo, out desktopResource);
var desktopSurface = desktopResource.QueryInterface<Surface>();
Can anyone give me an idea of how I can create a bitmap object from desktopSurface (a DXGI.Surface instance)?

I've just completed this myself, although I am not going to say much about this code!
public byte[] GetScreenData()
{
    // We want to copy the texture from the back buffer so
    // we don't hog it.
    Texture2DDescription desc = BackBuffer.Description;
    desc.CpuAccessFlags = CpuAccessFlags.Read;
    desc.Usage = ResourceUsage.Staging;
    desc.OptionFlags = ResourceOptionFlags.None;
    desc.BindFlags = BindFlags.None;
    byte[] data = null;
    using (var texture = new Texture2D(DeviceDirect3D, desc))
    {
        DeviceContextDirect3D.CopyResource(BackBuffer, texture);
        using (Surface surface = texture.QueryInterface<Surface>())
        {
            DataStream dataStream;
            var map = surface.Map(SharpDX.DXGI.MapFlags.Read, out dataStream);
            int lines = (int)(dataStream.Length / map.Pitch);
            data = new byte[surface.Description.Width * surface.Description.Height * 4];
            int dataCounter = 0;
            // width of the surface - 4 bytes per pixel.
            int actualWidth = surface.Description.Width * 4;
            for (int y = 0; y < lines; y++)
            {
                for (int x = 0; x < map.Pitch; x++)
                {
                    if (x < actualWidth)
                    {
                        // pixel data: copy it out
                        data[dataCounter++] = dataStream.Read<byte>();
                    }
                    else
                    {
                        // row padding: skip it
                        dataStream.Read<byte>();
                    }
                }
            }
            dataStream.Dispose();
            surface.Unmap();
        }
    }
    return data;
}
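Reading one byte at a time with Read<byte>() is slow for a full desktop frame. As a minimal sketch, assuming the same surface, map, dataStream, and data locals as in the method above, the inner loops could be replaced by row-wise reads (DataStream derives from System.IO.Stream, so the byte-array Read overload is available):
// Hedged sketch: copy each packed row in one call instead of per byte.
int rowBytes = surface.Description.Width * 4;
for (int y = 0; y < surface.Description.Height; y++)
{
    dataStream.Position = (long)y * map.Pitch;     // start of row y (rows are Pitch bytes apart)
    dataStream.Read(data, y * rowBytes, rowBytes); // copy one tightly packed row
}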
This will get you a byte[] which can then be used to generate a bitmap.
The following is how I saved it to a PNG image (using the WinRT BitmapEncoder APIs):
using (var stream = await file.OpenAsync(Windows.Storage.FileAccessMode.ReadWrite))
{
    BitmapEncoder encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.PngEncoderId, stream);
    double dpi = DisplayProperties.LogicalDpi;
    encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Straight,
                         (uint)width, (uint)height, dpi, dpi, pixelData);
    encoder.BitmapTransform.ScaledWidth = (uint)newWidth;
    encoder.BitmapTransform.ScaledHeight = (uint)newHeight;
    await encoder.FlushAsync();
    waiter.Set();
}
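For a desktop (non-WinRT) app, here is a minimal sketch of the same idea, assuming the byte[] comes from GetScreenData() above and width/height match the surface description (the ToBitmap name is just a placeholder):
// Hedged sketch for desktop apps: wrap the BGRA byte[] in a
// System.Drawing.Bitmap. Format32bppArgb stores B,G,R,A in memory on
// little-endian systems, matching DXGI_FORMAT_B8G8R8A8_UNORM.
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static Bitmap ToBitmap(byte[] data, int width, int height)
{
    var bitmap = new Bitmap(width, height, PixelFormat.Format32bppArgb);
    var bounds = new Rectangle(0, 0, width, height);
    BitmapData bits = bitmap.LockBits(bounds, ImageLockMode.WriteOnly, bitmap.PixelFormat);
    // data is tightly packed (width * 4 bytes per row); copy row by row
    // in case the GDI+ stride includes padding.
    for (int y = 0; y < height; y++)
    {
        Marshal.Copy(data, y * width * 4, IntPtr.Add(bits.Scan0, y * bits.Stride), width * 4);
    }
    bitmap.UnlockBits(bits);
    return bitmap;
}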
I know this was answered a while ago, and maybe you figured it out by now :3 but if someone else gets stuck I hope this helps!

The MSDN page for the Desktop Duplication API tells us the format of the image:
DXGI provides a surface that contains a current desktop image through the new IDXGIOutputDuplication::AcquireNextFrame method. The format of the desktop image is always DXGI_FORMAT_B8G8R8A8_UNORM no matter what the current display mode is.
You can use the Surface.Map(MapFlags, out DataStream) method to get access to the data on the CPU.
The code should look like* this:
DataStream dataStream;
var map = desktopSurface.Map(MapFlags.Read, out dataStream);
for (int y = 0; y < desktopSurface.Description.Height; y++) {
    for (int x = 0; x < desktopSurface.Description.Width; x++) {
        // read a DXGI_FORMAT_B8G8R8A8_UNORM pixel:
        byte b = dataStream.Read<byte>();
        byte g = dataStream.Read<byte>();
        byte r = dataStream.Read<byte>();
        byte a = dataStream.Read<byte>();
        // color (r, g, b, a) and pixel position (x, y) are available
        // TODO: write to bitmap or process otherwise
    }
    // rows are map.Pitch bytes apart, which may exceed width * 4;
    // skip any row padding before reading the next line
    dataStream.Position = (y + 1) * map.Pitch;
}
desktopSurface.Unmap();
*Disclaimer: I don't have a Windows 8 installation at hand, I'm only following the documentation. I hope this works :)

Related

How to split/divide image in parts in Flutter

How to split an image into equal-sized parts? I am just taking an image from assets and splitting it into equal parts in a grid-like manner, so that each image part can be used as a separate image - something similar to the attached picture.
You can use the image package (https://pub.dev/packages/image) to crop the image from assets with the copyCrop function. Save the parts to a List and then display them as in your example.
Edit:
I think you can work out how to split your image and display the parts as in your example once you know how to crop, so I'll just show how to convert an asset image to the image package's format for cropping.
List<Image> splitImage(List<int> input) {
  // convert the raw bytes to an image from the image package
  imglib.Image image = imglib.decodeImage(input);
  int x = 0, y = 0;
  int width = (image.width / 3).round();
  int height = (image.height / 3).round();
  // split the image into a 3x3 grid of parts
  List<imglib.Image> parts = [];
  for (int i = 0; i < 3; i++) {
    for (int j = 0; j < 3; j++) {
      parts.add(imglib.copyCrop(image, x, y, width, height));
      x += width;
    }
    x = 0;
    y += height;
  }
  // convert each part from the image package back to an Image widget for display
  List<Image> output = [];
  for (var img in parts) {
    output.add(Image.memory(imglib.encodeJpg(img)));
  }
  return output;
}
Remember to add this import: import 'package:image/image.dart' as imglib;

image addition on already created jsc3d 3d object

Is it possible to add an external image or text to an already-created jsc3d 3D object? For example, if canvas image data needs to be stored on the created 3D object, is that possible?
Yes, it's possible.
If you look at the jsc3d implementation of Texture, you will see that a texture already has an underlying canvas.
Say you have a canvas called "myTexture" and a Mesh called "myMesh", and, to keep it simple, that you only need a texture with a fixed size of 128x128 px. This will paint your canvas onto your mesh:
var canvas = document.getElementById('myTexture');
var context = canvas.getContext('2d');
var dim = 128;
var imgData = context.getImageData(0, 0, dim, dim);
var data = imgData.data;
var size = data.length / 4;
var texture = new JSC3D.Texture();
texture.data = new Array(size);
var alpha;
for (var i = 0, j = 0; i < size; i++, j += 4) {
    alpha = data[j + 3];
    texture.data[i] = alpha << 24 | data[j] << 16 | data[j+1] << 8 | data[j+2];
    if (alpha < 255)
        texture.hasTransparency = true;
}
texture.width = dim;
texture.height = dim;
myMesh.setTexture(texture);
viewer.update();
The .data loop is taken from JSC3D.Texture.prototype.createFromImage (credits humu2009, creator of jsc3d).

Resizing a DXGI Resource or Texture2D in SharpDX

I want to resize a screen captured using the Desktop Duplication API in SharpDX. I am using the Screen Capture sample code from the SharpDX Samples repository; the relevant portion follows:
SharpDX.DXGI.Resource screenResource;
OutputDuplicateFrameInformation duplicateFrameInformation;
// Try to get duplicated frame within given time
duplicatedOutput.AcquireNextFrame(10000, out duplicateFrameInformation, out screenResource);
if (i > 0)
{
    // copy resource into memory that can be accessed by the CPU
    using (var screenTexture2D = screenResource.QueryInterface<Texture2D>())
        device.ImmediateContext.CopyResource(screenTexture2D, screenTexture);
    // Get the desktop capture texture
    var mapSource = device.ImmediateContext.MapSubresource(screenTexture, 0, MapMode.Read, MapFlags.None);
    System.Diagnostics.Debug.WriteLine(watch.Elapsed);
    // Create Drawing.Bitmap
    var bitmap = new System.Drawing.Bitmap(width, height, PixelFormat.Format32bppArgb);
    var boundsRect = new System.Drawing.Rectangle(0, 0, width, height);
    // Copy pixels from screen capture Texture to GDI bitmap
    var mapDest = bitmap.LockBits(boundsRect, ImageLockMode.WriteOnly, bitmap.PixelFormat);
    var sourcePtr = mapSource.DataPointer;
    var destPtr = mapDest.Scan0;
    for (int y = 0; y < height; y++)
    {
        // Iterate and write to bitmap...
I would like to resize the image to much smaller than the actual screen size before processing it as a byte array. I do not need to save the image, just get at the bytes. I would like to do this relatively quickly and efficiently (e.g. leveraging the GPU if possible).
I'm not able to scale during CopyResource, since the output dimensions are required to match the input dimensions. Can I perform another copy from my screenTexture2D to scale? How exactly do I scale the resource - do I use a swap chain, a matrix transform, or something else?
If you are fine resizing to a power of two of the screen size, you can do it like this:
1. Create a smaller texture with RenderTarget/ShaderResource usage and the GenerateMipMaps option, the same size as the screen, with mip count > 1 (2 for size/2, 3 for size/4, etc.).
2. Copy the screen texture into the first mipmap of the smaller texture.
3. Call DeviceContext.GenerateMips on the smaller texture.
4. Copy the selected mipmap of the smaller texture (1: /2, 2: /4, etc.) to the staging texture (which should also be declared smaller, i.e. the same size as the mipmap that is going to be used).
A quick hack on the original code to generate a /2 texture would be like this:
[STAThread]
private static void Main()
{
    // # of graphics card adapter
    const int numAdapter = 0;
    // # of output device (i.e. monitor)
    const int numOutput = 0;
    const string outputFileName = "ScreenCapture.bmp";

    // Create DXGI Factory1
    var factory = new Factory1();
    var adapter = factory.GetAdapter1(numAdapter);

    // Create device from Adapter
    var device = new Device(adapter);

    // Get DXGI.Output
    var output = adapter.GetOutput(numOutput);
    var output1 = output.QueryInterface<Output1>();

    // Width/Height of desktop to capture
    int width = output.Description.DesktopBounds.Width;
    int height = output.Description.DesktopBounds.Height;

    // Create Staging texture CPU-accessible
    var textureDesc = new Texture2DDescription
    {
        CpuAccessFlags = CpuAccessFlags.Read,
        BindFlags = BindFlags.None,
        Format = Format.B8G8R8A8_UNorm,
        Width = width / 2,
        Height = height / 2,
        OptionFlags = ResourceOptionFlags.None,
        MipLevels = 1,
        ArraySize = 1,
        SampleDescription = { Count = 1, Quality = 0 },
        Usage = ResourceUsage.Staging
    };
    var stagingTexture = new Texture2D(device, textureDesc);

    // Create the smaller GPU-side texture that will hold the mipmaps
    var smallerTextureDesc = new Texture2DDescription
    {
        CpuAccessFlags = CpuAccessFlags.None,
        BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
        Format = Format.B8G8R8A8_UNorm,
        Width = width,
        Height = height,
        OptionFlags = ResourceOptionFlags.GenerateMipMaps,
        MipLevels = 4,
        ArraySize = 1,
        SampleDescription = { Count = 1, Quality = 0 },
        Usage = ResourceUsage.Default
    };
    var smallerTexture = new Texture2D(device, smallerTextureDesc);
    var smallerTextureView = new ShaderResourceView(device, smallerTexture);

    // Duplicate the output
    var duplicatedOutput = output1.DuplicateOutput(device);
    bool captureDone = false;
    for (int i = 0; !captureDone; i++)
    {
        try
        {
            SharpDX.DXGI.Resource screenResource;
            OutputDuplicateFrameInformation duplicateFrameInformation;
            // Try to get duplicated frame within given time
            duplicatedOutput.AcquireNextFrame(10000, out duplicateFrameInformation, out screenResource);
            if (i > 0)
            {
                // copy resource into memory that can be accessed by the CPU
                using (var screenTexture2D = screenResource.QueryInterface<Texture2D>())
                    device.ImmediateContext.CopySubresourceRegion(screenTexture2D, 0, null, smallerTexture, 0);

                // Generates the mipmaps of the screen
                device.ImmediateContext.GenerateMips(smallerTextureView);

                // Copy mipmap 1 of smallerTexture (size/2) to the staging texture
                device.ImmediateContext.CopySubresourceRegion(smallerTexture, 1, null, stagingTexture, 0);

                // Get the desktop capture texture
                var mapSource = device.ImmediateContext.MapSubresource(stagingTexture, 0, MapMode.Read, MapFlags.None);

                // Create Drawing.Bitmap
                var bitmap = new System.Drawing.Bitmap(width / 2, height / 2, PixelFormat.Format32bppArgb);
                var boundsRect = new System.Drawing.Rectangle(0, 0, width / 2, height / 2);

                // Copy pixels from screen capture Texture to GDI bitmap
                var mapDest = bitmap.LockBits(boundsRect, ImageLockMode.WriteOnly, bitmap.PixelFormat);
                var sourcePtr = mapSource.DataPointer;
                var destPtr = mapDest.Scan0;
                for (int y = 0; y < height / 2; y++)
                {
                    // Copy a single line
                    Utilities.CopyMemory(destPtr, sourcePtr, width / 2 * 4);
                    // Advance pointers
                    sourcePtr = IntPtr.Add(sourcePtr, mapSource.RowPitch);
                    destPtr = IntPtr.Add(destPtr, mapDest.Stride);
                }

                // Release source and dest locks
                bitmap.UnlockBits(mapDest);
                device.ImmediateContext.UnmapSubresource(stagingTexture, 0);

                // Save the output
                bitmap.Save(outputFileName);

                // Capture done
                captureDone = true;
            }
            screenResource.Dispose();
            duplicatedOutput.ReleaseFrame();
        }
        catch (SharpDXException e)
        {
            if (e.ResultCode.Code != SharpDX.DXGI.ResultCode.WaitTimeout.Result.Code)
            {
                throw;
            }
        }
    }

    // Display the texture using system associated viewer
    System.Diagnostics.Process.Start(Path.GetFullPath(Path.Combine(Environment.CurrentDirectory, outputFileName)));

    // TODO: We should clean up all allocated COM objects here
}
You need to take your original source surface in GPU memory and Draw() it onto a smaller surface. This involves simple vertex/pixel shaders, which some folks with simple needs would rather bypass.
I would look to see if someone has made a sprite lib for SharpDX; it should be a common "thing". Or use Direct2D (which is much more fun). Since D2D is just a user-mode library on top of D3D, it interops with D3D very easily.
I've never used SharpDX, but from memory you would do something like this:
1.) Create an ID2D1Device, wrapping your existing DXGI Device (make sure your dxgi device creation flag has D3D11_CREATE_DEVICE_BGRA_SUPPORT)
2.) Get the ID2D1DeviceContext from your ID2D1Device
3.) Wrap your source and destination DXGI surfaces into D2D bitmaps with ID2D1DeviceContext::CreateBitmapFromDxgiSurface
4.) Call ID2D1DeviceContext::SetTarget with your destination bitmap
5.) BeginDraw, ID2D1DeviceContext::DrawBitmap, passing your source D2D bitmap. EndDraw
6.) Save your destination
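A rough sketch of those steps in SharpDX (untested; exact overloads vary across SharpDX versions, and dxgiDevice, sourceSurface, destSurface, destWidth, and destHeight are placeholder names for objects you already have):
// Hedged sketch: wrap an existing DXGI device and two DXGI surfaces in
// Direct2D, then draw the source onto the (smaller) destination.
var d2dDevice = new SharpDX.Direct2D1.Device(dxgiDevice);                  // step 1
var d2dContext = new SharpDX.Direct2D1.DeviceContext(d2dDevice,
    SharpDX.Direct2D1.DeviceContextOptions.None);                          // step 2

// D2D requires BGRA with premultiplied alpha for DXGI-surface bitmaps.
var pixelFormat = new SharpDX.Direct2D1.PixelFormat(
    SharpDX.DXGI.Format.B8G8R8A8_UNorm,
    SharpDX.Direct2D1.AlphaMode.Premultiplied);

var sourceBitmap = new SharpDX.Direct2D1.Bitmap1(d2dContext, sourceSurface,
    new SharpDX.Direct2D1.BitmapProperties1(pixelFormat));                 // step 3
var destBitmap = new SharpDX.Direct2D1.Bitmap1(d2dContext, destSurface,
    new SharpDX.Direct2D1.BitmapProperties1(pixelFormat, 96, 96,
        SharpDX.Direct2D1.BitmapOptions.Target));

d2dContext.Target = destBitmap;                                            // step 4
d2dContext.BeginDraw();                                                    // step 5
d2dContext.DrawBitmap(sourceBitmap,
    new SharpDX.RectangleF(0, 0, destWidth, destHeight), 1.0f,
    SharpDX.Direct2D1.BitmapInterpolationMode.Linear);
d2dContext.EndDraw();
// step 6: read back/save destSurface, e.g. via a staging copy as shown above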
Here is a pixelate example...
d2d_device_context_h()->BeginDraw();
d2d_device_context_h()->SetTarget(mp_ppBitmap1.Get());
D2D1_SIZE_F rtSize = mp_ppBitmap1->GetSize();
rtSize.height *= (1.0f / cbpx.iPixelsize.y);
rtSize.width *= (1.0f / cbpx.iPixelsize.x);
D2D1_RECT_F rtRect = { 0.0f, 0.0f, rtSize.width, rtSize.height };
D2D1_SIZE_F rsSize = mp_ppBitmap0->GetSize();
D2D1_RECT_F rsRect = { 0.0f, 0.0f, rsSize.width, rsSize.height };
d2d_device_context_h()->DrawBitmap(mp_ppBitmap0.Get(), &rtRect, 1.0f,
D2D1_BITMAP_INTERPOLATION_MODE_LINEAR, &rsRect);
d2d_device_context_h()->SetTarget(mp_ppBitmap0.Get());
d2d_device_context_h()->DrawBitmap(mp_ppBitmap1.Get(), &rsRect, 1.0f,
D2D1_BITMAP_INTERPOLATION_MODE_NEAREST_NEIGHBOR, &rtRect);
d2d_device_context_h()->EndDraw();
Where iPixelsize.xy is the size of the "pixelated pixel". Note that I only use linear interpolation when shrinking the bitmap and NOT when I re-enlarge it; this is what generates the pixelation effect.

Generating a block/voxel terrain efficiently without FPS drops?

I'm having a little trouble generating/displaying a terrain using Three.JS without major FPS drops. Here's the code I wrote to create each block and set the correct position:
var TO_METERS = 10;
var testOb = [];
var blockGeometry = new THREE.CubeGeometry(TO_METERS, TO_METERS, TO_METERS);
var blockMat = new THREE.MeshLambertMaterial({color: 0xFFFFFF, wrapAround: true,
    side: THREE.FrontSide, shading: THREE.FlatShading});

function loadChunk(startX, startY, startZ) {
    var yVar = 0;
    var zVar = 0;
    var blockCo = 0;
    var combinedGeometry = new THREE.CubeGeometry(0, 0, 0);
    for (var x = 0; x <= 4999; x++) {
        testOb[x] = new THREE.Mesh();
        testOb[x].geometry = blockGeometry;
        if (blockCo == 10) {
            blockCo = 0;
            if (zVar == 90) {
                yVar += TO_METERS;
                zVar = 0;
            }
            else {
                zVar += TO_METERS;
            }
        }
        testOb[x].position.x = (blockCo * TO_METERS) + startX;
        testOb[x].position.y = (yVar - 500) + startY;
        testOb[x].position.z = zVar + startZ;
        testOb[x].castShadow = true;
        blockCo++;
        THREE.GeometryUtils.merge(combinedGeometry, testOb[x]);
    }
    var cMesh = new Physijs.BoxMesh(combinedGeometry, blockMat, 0);
    scene.add(cMesh);
}
Basically it creates each block, sets its position, and merges the blocks together using THREE.GeometryUtils.merge to make up a "chunk" (a rectangle), Minecraft-style.
I'm pretty sure the large number of individual blocks that make up each chunk is causing the low FPS. With only 10 chunks the FPS is fine; if I add any more, the FPS drops drastically.
One thought I had was to use a WebWorker to do the processing, but of course that isn't possible, as I can't add the chunks or even use Three.JS within it. That would also only help the load time, not the FPS problem I'm having.
If anyone has any ideas how I would go about fixing this problem, I would really appreciate it. :) Maybe it would be possible to hide blocks which the camera can't see? Or I might just totally be doing it the wrong way. Thanks!

Pixel reordering is wrong when trying to process and display an image copy with lower res

I'm currently making an application in Processing intended to take an image and apply an 8-bit style to it, i.e. to make it look pixelated. To do this it has a method that takes a style and window size as parameters (the style is the shape in which the window is to be displayed - rect, ellipse, cross, etc. - and the window size is a number between 1-10, squared), to produce results similar to the iPhone app pxl (http://itunes.apple.com/us/app/pxl./id499620829?mt=8). This method then counts through the image's pixels window by window, averages the colour of each window, and displays a rect (or whichever shape/style was chosen) at the equivalent space on the other side of the sketch window (when run, the sketch is supposed to display the original image on the left, mirrored by the processed version on the right).
The problem I'm having is that when drawing the averaged-colour rects, the order in which they display becomes skewed.
Although the results are rather amusing, they are not what I want. Here's the code:
//=========================================================
// GLOBAL VARIABLES
//=========================================================
PImage img;
public int avR, avG, avB;
private final int BLOCKS = 0, DOTS = 1, VERTICAL_CROSSES = 2, HORIZONTAL_CROSSES = 3;
public sRGB styleColour;

//=========================================================
// METHODS FOR AVERAGING WINDOW COLOURS, CREATING AN
// 8 BIT REPRESENTATION OF THE IMAGE AND LOADING AN
// IMAGE
//=========================================================
public sRGB averageWindowColour(color [] c){
  // RGB Variables
  float r = 0;
  float g = 0;
  float b = 0;
  // Iterator
  int i = 0;
  int sizeOfWindow = c.length;
  // Count through the window's pixels, store the
  // red, green and blue values in the RGB variables
  // and sum them into the average variables
  for(i = 0; i < c.length; i++){
    r = red  (c[i]);
    g = green(c[i]);
    b = blue (c[i]);
    avR += r;
    avG += g;
    avB += b;
  }
  // Divide the sum of the red, green and blue
  // values by the number of pixels in the window
  // to obtain the average
  avR = avR / sizeOfWindow;
  avG = avG / sizeOfWindow;
  avB = avB / sizeOfWindow;
  // Return the colour
  return new sRGB(avR, avG, avB);
}

public void eightBitIT(int style, int windowSize){
  img.loadPixels();
  for(int wx = 0; wx < img.width; wx += (sqrt(windowSize))){
    for(int wy = 0; wy < img.height; wy += (sqrt(windowSize))){
      color [] tempCols = new color[windowSize];
      int i = 0;
      for(int x = 0; x < (sqrt(windowSize)); x++){
        for(int y = 0; y < (sqrt(windowSize)); y++){
          int loc = (wx+x) + (y+wy)*(img.width-windowSize);
          tempCols[i] = img.pixels[loc];
          // println("Window loc X: "+(wx+(img.width+5))+" Window loc Y: "+(wy+5)+" Window pix X: "+x+" Window Pix Y: "+y);
          i++;
        }
      }
      // this is meant to be in a switch test (0 = rect, 1 ellipse etc)
      styleColour = new sRGB(averageWindowColour(tempCols));
      // println("R: "+red(styleColour.returnColourScaled())+" G: "+green(styleColour.returnColourScaled())+" B: "+blue(styleColour.returnColourScaled()));
      rectMode(CORNER);
      noStroke();
      fill(styleColour.returnColourScaled());
      // println("Rect Loc X: "+(wx+(img.width+5))+" Y: "+(wy+5));
      ellipse(wx+(img.width+5), wy+5, sqrt(windowSize), sqrt(windowSize));
    }
  }
}

public PImage load(String s){
  PImage temp = loadImage(s);
  temp.resize(600, 470);
  return temp;
}

void setup(){
  background(0);
  // Load the image and set size of screen to its size*2 + the borders
  // and display the image.
  img = loadImage("oscilloscope.jpg");
  size(img.width*2+15, (img.height+10));
  frameRate(25);
  image(img, 5, 5);
  // Draw the borders
  strokeWeight(5);
  stroke(255);
  rectMode(CORNERS);
  noFill();
  rect(2.5, 2.5, img.width+3, height-3);
  rect(img.width+2.5, 2.5, width-3, height-3);
  stroke(255, 0, 0);
  strokeWeight(1);
  rect(5, 5, 9, 9); // window example
  // process the image
  eightBitIT(BLOCKS, 16);
}

void draw(){
  //eightBitIT(BLOCKS, 4);
  //println("X: "+mouseX+" Y: "+mouseY);
}
This has been bugging me for a while now, as I can't see where in my code I'm offsetting the coordinates so they display like this. I know it's probably something very trivial, but I can't seem to work it out. If anyone can spot why this skewed reordering is happening I would be much obliged, as I have quite a lot of other ideas I want to implement and this is holding me back.
Thanks,
