Convert from Android.Media.Image to Android.Graphics.Bitmap (possibly with Xamarin) - xamarin

I have an Android.Media.Image frame acquired from the Camera2 API preview (in a Xamarin app) that I want to process.
The image format is YUV_420_888.
In a previous iteration I was able to get an Android.Graphics.Bitmap from the CameraPreview (but not anymore), and with that bitmap I was able to:
crop it
convert it to JPEG, save it or send it to server
convert it to a Mat to process with Emgu in real time
I searched for how to convert from Android.Media.Image to Android.Graphics.Bitmap, thinking it would be quite straightforward, but it's not.
I found mostly Java code that I tried to convert to C# Xamarin, but without success.
The first step is supposed to be converting to a YuvImage (I found 2 different versions, hence the comments):
private static byte[] YUV_420_888toNV21(Android.Media.Image image)
{
    byte[] nv21;
    ByteBuffer yBuffer = image.GetPlanes()[0].Buffer;
    ByteBuffer uBuffer = image.GetPlanes()[1].Buffer;
    ByteBuffer vBuffer = image.GetPlanes()[2].Buffer;

    int ySize = yBuffer.Remaining();
    int uSize = uBuffer.Remaining();
    int vSize = vBuffer.Remaining();

    nv21 = new byte[ySize + uSize + vSize];

    //yBuffer.Get(nv21, 0, ySize);
    //vBuffer.Get(nv21, ySize, vSize);
    //uBuffer.Get(nv21, ySize + vSize, uSize);

    yBuffer.Get(nv21, 0, ySize);
    uBuffer.Get(nv21, ySize, uSize);
    vBuffer.Get(nv21, ySize + uSize, vSize);

    return nv21;
}
And then the saving:
var bbitmap = YUV_420_888toNV21(image);
YuvImage yuv = new YuvImage(bbitmap, ImageFormatType.Nv21, image.Width, image.Height, null);
var ms = new MemoryStream();
yuv.CompressToJpeg(new Android.Graphics.Rect(0, 0, image.Width, image.Height), 100, ms);
File.WriteAllBytes("/sdcard/Download/testFrame.jpg", ms.ToArray());
But this is the result:
This was just a test to actually save and check the captured frame, because at the end of this I still don't have a Bitmap (even if it was working).
In the case of real-time processing, I don't want/need the JPEG conversion because it would only slow everything down.
I'd like to convert the Image to a Bitmap, crop it and feed it to Emgu as a Mat.
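For reference, here is a rough sketch of the decode step I would expect to use once the NV21 buffer is assembled correctly (the Nv21ToBitmap helper name is mine, and I haven't verified this end to end; the U/V planes may also need de-interleaving depending on their PixelStride):

private static Android.Graphics.Bitmap Nv21ToBitmap(byte[] nv21, int width, int height)
{
    // Wrap the NV21 bytes, compress the full frame to JPEG, then decode the JPEG bytes into a Bitmap.
    using (var yuv = new Android.Graphics.YuvImage(nv21, Android.Graphics.ImageFormatType.Nv21, width, height, null))
    using (var ms = new System.IO.MemoryStream())
    {
        yuv.CompressToJpeg(new Android.Graphics.Rect(0, 0, width, height), 100, ms);
        byte[] jpegBytes = ms.ToArray();
        return Android.Graphics.BitmapFactory.DecodeByteArray(jpegBytes, 0, jpegBytes.Length);
    }
}

If there is a way to skip the JPEG round trip entirely and go straight to a Bitmap (or a Mat), that would be even better.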
Any suggestion?
Thanks

Related

How to render the PDF page as EMF or meta file in PDFium

Currently I have generated the PDF page as a bitmap by using the FPDF_RenderPageBitmap method.
Is there any method for rendering the PDF page as an EMF or as a metafile in PDFium?
In the pdfium project there is a sample folder that has a pdfium_test.cc file with examples of output in different formats. PNG, EMF, BMP, TXT, and PPM are all in there at the present time.
The current code to render an EMF:
void WriteEmf(FPDF_PAGE page, const char* pdf_name, int num) {
  int width = static_cast<int>(FPDF_GetPageWidth(page));
  int height = static_cast<int>(FPDF_GetPageHeight(page));

  char filename[256];
  snprintf(filename, sizeof(filename), "%s.%d.emf", pdf_name, num);

  HDC dc = CreateEnhMetaFileA(nullptr, filename, nullptr, nullptr);

  HRGN rgn = CreateRectRgn(0, 0, width, height);
  SelectClipRgn(dc, rgn);
  DeleteObject(rgn);

  SelectObject(dc, GetStockObject(NULL_PEN));
  SelectObject(dc, GetStockObject(WHITE_BRUSH));
  // If a PS_NULL pen is used, the dimensions of the rectangle are 1 pixel less.
  Rectangle(dc, 0, 0, width + 1, height + 1);

  FPDF_RenderPage(dc, page, 0, 0, width, height, 0,
                  FPDF_ANNOT | FPDF_PRINTING | FPDF_NO_CATCH);

  DeleteEnhMetaFile(CloseEnhMetaFile(dc));
}
On Windows, you want to call FPDF_RenderPage and pass in the HDC; this should allow you to get the EMF data out. You can see the Chromium printing code for example usage. There is also FPDF_SetPrintMode, which lets you set different modes.

Convert Xamarin.Forms Image into base64 format

I need to convert a Xamarin.Forms Image into base64 format. Can anyone help me with this?
This is how I've been trying to do it, but it doesn't work.
var inputStream = signatureImage.Source.GetValue(UriImageSource.UriProperty);
//Getting Stream as a Memorystream
var signatureMemoryStream = inputStream as MemoryStream;
if (signatureMemoryStream == null)
{
    signatureMemoryStream = new MemoryStream();
    inputStream.CopyTo(signatureMemoryStream);
}
//Adding memorystream into a byte array
var byteArray = signatureMemoryStream.ToArray();
//Converting byte array into Base64 string
base64String = Convert.ToBase64String(byteArray);
"signatureImage" is the image name.
Once you get your file path, you can use the following code that worked for me.
var stream = file.GetStream();
var bytes = new byte [stream.Length];
await stream.ReadAsync(bytes, 0, (int)stream.Length);
string base64 = System.Convert.ToBase64String(bytes);
I found it here
Image is just a control in Xamarin forms to display the image.
It is not something from which you can get your image byte array out.
You will be better off using the Media Plugin and saving the image to disk. Then load it via a memory stream and convert.
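For example, a rough sketch of that flow with the Media Plugin (assuming the Plugin.Media package; the exact options are illustrative and not tested):

using Plugin.Media;
using Plugin.Media.Abstractions;

// Capture a photo to disk with the Media Plugin, then read the file back and convert it.
var file = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions());
if (file != null)
{
    using (var stream = file.GetStream())
    using (var ms = new System.IO.MemoryStream())
    {
        stream.CopyTo(ms);
        string base64 = System.Convert.ToBase64String(ms.ToArray());
    }
}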
You can also use FFImageLoading. It has 2 methods which can be of use for you (a rough usage sketch follows the signatures):
GetImageAsJpgAsync(int quality = 90, int desiredWidth = 0, int desiredHeight = 0)
GetImageAsPngAsync(int desiredWidth = 0, int desiredHeight = 0)
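A rough usage sketch, assuming these are instance methods on the FFImageLoading CachedImage control and that they return the encoded bytes (not verified):

// cachedImage is assumed to be an FFImageLoading.Forms.CachedImage declared in XAML or code.
byte[] jpgBytes = await cachedImage.GetImageAsJpgAsync(quality: 90);
string base64 = System.Convert.ToBase64String(jpgBytes);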
The SO question - Convert Image into byte array in Xamarin.Forms shows how to do it in platform-specific code here.
The forum thread (Convert Image to byte[]) has a good discussion on why you can't get it from the control.
You can also do this as below:
var base64String = Convert.ToBase64String(File.ReadAllBytes(file.Path));

Image to Byte array conversion

I need to convert an image to a byte array, as I have to upload it to a server. I have seen many articles but I am not able to understand how it is done. Forgive me if you think my question is not fit to be asked in this forum.
If you have a WriteableBitmap, for example called LoadedPhoto, you can try:
using (MemoryStream ms = new MemoryStream())
{
    LoadedPhoto.SaveJpeg(ms, LoadedPhoto.PixelWidth, LoadedPhoto.PixelHeight, 0, 95);
    ms.Seek(0, 0);
    byte[] data = new byte[ms.Length];
    ms.Read(data, 0, data.Length);
    ms.Close();
}
If you're using the WriteableBitmapEx library from CodePlex, it can do a conversion to a byte array, too:
http://writeablebitmapex.codeplex.com/
I assume that it is a Bitmap (however, it works for all image types):
System.Drawing.Bitmap bmp = GetYourImage();
System.IO.MemoryStream stream = new System.IO.MemoryStream();
bmp.Save(stream, System.Drawing.Imaging.ImageFormat.Bmp);
stream.Position = 0;
byte[] data = stream.ToArray();
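Since the original goal is to upload the image to a server, the resulting byte array can then be posted, for example with HttpClient (a minimal sketch; the URL and content type are placeholders):

using System.Net.Http;
using System.Net.Http.Headers;

// Post the raw image bytes from the previous snippet to a (placeholder) upload endpoint.
using (var client = new HttpClient())
using (var content = new ByteArrayContent(data))
{
    content.Headers.ContentType = new MediaTypeHeaderValue("image/bmp");
    HttpResponseMessage response = await client.PostAsync("https://example.com/upload", content);
    response.EnsureSuccessStatusCode();
}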

UINT16 monochrome image to 8-bit monochrome QImage using FreeImage

I want to convert a UINT16 monochrome image to a 8 bits image, in C++.
I have that image in a
char *buffer;
I'd like to give the new converted buffer to a QImage (Qt).
I'm trying with FreeImagePlus:
fipImage fimage;
if (fimage.loadFromMemory(...) == false)
    //error
loadFromMemory needs a fipMemoryIO address:
loadFromMemory(fipMemoryIO &memIO, int flag = 0)
So I do
fipImage fimage;
BYTE *buf = (BYTE*)malloc(gimage.GetBufferLength() * sizeof(BYTE));
// 'buf' is empty, I have to fill it with 'buffer' content
// how can I do it?
fipMemoryIO memIO(buf, gimage.GetBufferLength());
fimage.loadFromMemory(memIO);
if (fimage.convertTo8Bits() == true)
cout << "Good";
Then I would do something like
fimage.saveToMemory(...
or
fimage.saveToHandle(...
I don't understand what a FREE_IMAGE_FORMAT is; it is the first argument to both of those functions. I can't find information on those types in the FreeImage documentation.
Then I'd finish with
imageQt = new QImage(destiny, dimX, dimY, QImage::Format_Indexed8);
How can I fill 'buf' with the content of the initial buffer?
And get the data from the fipImage to a uchar* data for a QImage?
Thanks.
The conversion is simple to do in plain old C++, no need for external libraries unless they are significantly faster and you care about such a speedup. Below is how I'd do the conversion, at least as a first cut. The data is converted inside of the input buffer, since the output is smaller than the input.
QImage from16Bit(void * buffer, int width, int height) {
    int size = width*height*2; // length of data in buffer, in bytes
    quint8 * output = reinterpret_cast<quint8*>(buffer);
    const quint16 * input = reinterpret_cast<const quint16*>(buffer);
    if (!size) return QImage();
    do {
        *output++ = *input++ >> 8;
    } while (size -= 2);
    return QImage(reinterpret_cast<uchar*>(buffer), width, height, QImage::Format_Indexed8);
}

Images saved with D3DXSaveSurfaceToFile will open in Paint, not Photoshop

I'm using D3DXSaveSurfaceToFile to save windowed Direct3D 9 surfaces to PNG, BMP and JPG files. There are no errors returned from the D3DXSaveSurfaceToFile call and all files open fine in Windows Photo Viewer and Paint. But they will not open in a higher end image editing program such as Paint Shop Pro or Photoshop. The error messages from these programs basically say that the file is corrupted. If I open the files in Paint and then save them in the same file format with a different file name, then they'll open fine in the other programs.
This leads me to believe that D3DXSaveSurfaceToFile is writing out non-standard versions of these file formats. Is there some way I can get this function to write out files that can be opened in programs like Photoshop without the intermediate step of resaving the files in Paint? Or is there another function I should be using that does a better job of saving a Direct3D surfaces to an image?
Take a look at the file in an image meta viewer. What does it tell you?
Unfortunately D3DXSaveSurfaceToFile() isn't the most stable (it's also exceptionally slow). Personally I do something like the code below. It works even on anti-aliased displays by doing an offscreen render to take the screenshot, then getting it into a buffer. It also supports only the most common pixel formats. Sorry for any errors in it; I pulled it out of an app I used to work on.
You can then, in your code and probably in another thread, then convert said 'bitmap' to anything you like using a variety of different code.
void HandleScreenshot(IDirect3DDevice9* device)
{
    DWORD tcHandleScreenshot = GetTickCount();
    LPDIRECT3DSURFACE9 pd3dsBack = NULL;
    LPDIRECT3DSURFACE9 pd3dsTemp = NULL;

    // Grab the back buffer into a surface
    if ( SUCCEEDED ( device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pd3dsBack) ))
    {
        D3DSURFACE_DESC desc;
        pd3dsBack->GetDesc(&desc);

        LPDIRECT3DSURFACE9 pd3dsCopy = NULL;
        if (desc.MultiSampleType != D3DMULTISAMPLE_NONE)
        {
            if (SUCCEEDED(device->CreateRenderTarget(desc.Width, desc.Height, desc.Format, D3DMULTISAMPLE_NONE, 0, FALSE, &pd3dsCopy, NULL)))
            {
                if (SUCCEEDED(device->StretchRect(pd3dsBack, NULL, pd3dsCopy, NULL, D3DTEXF_NONE)))
                {
                    pd3dsBack->Release();
                    pd3dsBack = pd3dsCopy;
                }
                else
                {
                    pd3dsCopy->Release();
                }
            }
        }

        if (SUCCEEDED(device->CreateOffscreenPlainSurface(desc.Width, desc.Height, desc.Format, D3DPOOL_SYSTEMMEM, &pd3dsTemp, NULL)))
        {
            DWORD tmpTimeGRTD = GetTickCount();
            if (SUCCEEDED(device->GetRenderTargetData(pd3dsBack, pd3dsTemp)))
            {
                D3DLOCKED_RECT lockedSrcRect;
                if (SUCCEEDED(pd3dsTemp->LockRect(&lockedSrcRect, NULL, D3DLOCK_READONLY | D3DLOCK_NOSYSLOCK | D3DLOCK_NO_DIRTY_UPDATE)))
                {
                    int nSize = desc.Width * desc.Height * 3;
                    BYTE* pixels = new BYTE[nSize + 1];
                    int iSrcPitch = lockedSrcRect.Pitch;
                    BYTE* pSrcRow = (BYTE*)lockedSrcRect.pBits;
                    LPBYTE lpDest = pixels;
                    LPDWORD lpSrc;

                    switch (desc.Format)
                    {
                    case D3DFMT_A8R8G8B8:
                    case D3DFMT_X8R8G8B8:
                        for (int y = desc.Height - 1; y >= 0; y--)
                        {
                            lpSrc = reinterpret_cast<LPDWORD>(lockedSrcRect.pBits) + y * desc.Width;
                            for (unsigned int x = 0; x < desc.Width; x++)
                            {
                                *reinterpret_cast<LPDWORD>(lpDest) = *lpSrc;
                                lpSrc++;     // increment source pointer by 1 DWORD
                                lpDest += 3; // increment destination pointer by 3 bytes
                            }
                        }
                        break;
                    default:
                        ZeroMemory(pixels, nSize);
                    }

                    pd3dsTemp->UnlockRect();

                    BITMAPINFOHEADER header;
                    header.biWidth = desc.Width;
                    header.biHeight = desc.Height;
                    header.biSizeImage = nSize;
                    header.biSize = sizeof(BITMAPINFOHEADER);
                    header.biPlanes = 1;
                    header.biBitCount = 3 * 8; // RGB
                    header.biCompression = 0;
                    header.biXPelsPerMeter = 0;
                    header.biYPelsPerMeter = 0;
                    header.biClrUsed = 0;
                    header.biClrImportant = 0;

                    BITMAPFILEHEADER bfh = {0};
                    bfh.bfType = 0x4d42;
                    bfh.bfOffBits = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER);
                    bfh.bfSize = bfh.bfOffBits + nSize;

                    unsigned int rough_size = sizeof(BITMAPINFOHEADER) + sizeof(BITMAPFILEHEADER) + nSize;
                    unsigned char* p = new unsigned char[rough_size];
                    unsigned char* dst = p;
                    memcpy(dst, &bfh, sizeof(BITMAPFILEHEADER));
                    dst += sizeof(BITMAPFILEHEADER);
                    memcpy(dst, &header, sizeof(BITMAPINFOHEADER));
                    dst += sizeof(BITMAPINFOHEADER);
                    memcpy(dst, pixels, nSize);

                    delete [] pixels;
                    /**********************************************/
                    // p now has a full BMP file, write it out here
                }
            }
            pd3dsTemp->Release();
        }
        pd3dsBack->Release();
    }
}
Turns out that it was a combination of a bug in my code and Paint being more forgiving than Photoshop when it comes to reading files. The bug in my code caused the files to be saved with the wrong extension (i.e. Image.bmp was actually saved using D3DXIFF_JPG). When opening a file that contained a JPG image but had a BMP extension, Photoshop simply rejected the file. I guess Paint worked since it ignored the file extension and just decoded the file contents.
Looking at a file in an image meta viewer helped me to see the problem.
