Xamarin: Unable to extract central square of photo in Android/iOS - xamarin

I am trying to get an image using the camera. The image needs to be 256x256, taken from the centre of a photo captured with the phone's camera. I found this code at: https://forums.xamarin.com/discussion/37647/cross-platform-crop-image-view
I am using this code for Android...
public byte[] CropPhoto(byte[] photoToCropBytes, Rectangle rectangleToCrop, double outputWidth, double outputHeight)
{
    using (var photoOutputStream = new MemoryStream())
    {
        // Load the bitmap
        var inSampleSize = CalculateInSampleSize((int)rectangleToCrop.Width, (int)rectangleToCrop.Height, (int)outputWidth, (int)outputHeight);
        var options = new BitmapFactory.Options();
        options.InSampleSize = inSampleSize;
        //options.InPurgeable = true; see http://developer.android.com/reference/android/graphics/BitmapFactory.Options.html
        using (var photoToCropBitmap = BitmapFactory.DecodeByteArray(photoToCropBytes, 0, photoToCropBytes.Length, options))
        {
            var matrix = new Matrix();
            var matrixScale = outputWidth / rectangleToCrop.Width * inSampleSize;
            matrix.PostScale((float)matrixScale, (float)matrixScale);
            using (var photoCroppedBitmap = Bitmap.CreateBitmap(photoToCropBitmap, (int)(rectangleToCrop.X / inSampleSize), (int)(rectangleToCrop.Y / inSampleSize), (int)(rectangleToCrop.Width / inSampleSize), (int)(rectangleToCrop.Height / inSampleSize), matrix, true))
            {
                photoCroppedBitmap.Compress(Bitmap.CompressFormat.Jpeg, 100, photoOutputStream);
            }
        }
        return photoOutputStream.ToArray();
    }
}
public static int CalculateInSampleSize(int inputWidth, int inputHeight, int outputWidth, int outputHeight)
{
    //see http://developer.android.com/training/displaying-bitmaps/load-bitmap.html
    int inSampleSize = 1; //default
    if (inputHeight > outputHeight || inputWidth > outputWidth)
    {
        int halfHeight = inputHeight / 2;
        int halfWidth = inputWidth / 2;
        // Calculate the largest inSampleSize value that is a power of 2 and keeps both
        // height and width larger than the requested height and width.
        while ((halfHeight / inSampleSize) > outputHeight && (halfWidth / inSampleSize) > outputWidth)
        {
            inSampleSize *= 2;
        }
    }
    return inSampleSize;
}
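The CalculateInSampleSize helper above can be checked in isolation. Here is a rough Java transliteration of the same power-of-two loop (the sizes in main are made-up examples, not values from the question):

```java
public class SampleSize {
    // Largest power-of-two inSampleSize that keeps both halved dimensions
    // larger than the requested output size (the Android docs' recipe).
    static int calculateInSampleSize(int inputWidth, int inputHeight,
                                     int outputWidth, int outputHeight) {
        int inSampleSize = 1;
        if (inputHeight > outputHeight || inputWidth > outputWidth) {
            int halfHeight = inputHeight / 2;
            int halfWidth = inputWidth / 2;
            while ((halfHeight / inSampleSize) > outputHeight
                    && (halfWidth / inSampleSize) > outputWidth) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }

    public static void main(String[] args) {
        // e.g. a 2048x2048 source decoded toward a 256x256 target
        System.out.println(calculateInSampleSize(2048, 2048, 256, 256)); // 4
    }
}
```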
and this code for iOS...
public byte[] CropPhoto(byte[] photoToCropBytes, Xamarin.Forms.Rectangle rectangleToCrop, double outputWidth, double outputHeight)
{
    byte[] photoOutputBytes;
    using (var data = NSData.FromArray(photoToCropBytes))
    {
        using (var photoToCropCGImage = UIImage.LoadFromData(data).CGImage)
        {
            //crop image
            using (var photoCroppedCGImage = photoToCropCGImage.WithImageInRect(new CGRect((nfloat)rectangleToCrop.X, (nfloat)rectangleToCrop.Y, (nfloat)rectangleToCrop.Width, (nfloat)rectangleToCrop.Height)))
            {
                using (var photoCroppedUIImage = UIImage.FromImage(photoCroppedCGImage))
                {
                    //create a 24bit RGB image to the output size
                    using (var cGBitmapContext = new CGBitmapContext(IntPtr.Zero, (int)outputWidth, (int)outputHeight, 8, (int)(4 * outputWidth), CGColorSpace.CreateDeviceRGB(), CGImageAlphaInfo.PremultipliedFirst))
                    {
                        var photoOutputRectangleF = new RectangleF(0f, 0f, (float)outputWidth, (float)outputHeight);
                        // draw the cropped photo resized
                        cGBitmapContext.DrawImage(photoOutputRectangleF, photoCroppedUIImage.CGImage);
                        //get cropped resized photo
                        var photoOutputUIImage = UIKit.UIImage.FromImage(cGBitmapContext.ToImage());
                        //convert cropped resized photo to bytes and then stream
                        using (var photoOutputNsData = photoOutputUIImage.AsJPEG())
                        {
                            photoOutputBytes = new Byte[photoOutputNsData.Length];
                            System.Runtime.InteropServices.Marshal.Copy(photoOutputNsData.Bytes, photoOutputBytes, 0, Convert.ToInt32(photoOutputNsData.Length));
                        }
                    }
                }
            }
        }
    }
    return photoOutputBytes;
}
I am struggling to work out exactly what parameters to pass to the function.
Currently, I am doing the following:
double cropSize = Math.Min(DeviceDisplay.MainDisplayInfo.Width, DeviceDisplay.MainDisplayInfo.Height);
double left = (DeviceDisplay.MainDisplayInfo.Width - cropSize) / 2.0;
double top = (DeviceDisplay.MainDisplayInfo.Height - cropSize) / 2.0;
// Get a square resized and cropped from the top image as a byte[]
_imageData = mediaService.CropPhoto(_imageData, new Rectangle(left, top, cropSize, cropSize), 256, 256);
I was expecting this to crop the image to the central square (in portrait mode the side length would be the width of the photo) and then scale it down to a 256x256 image. But it never picks the centre of the image.
Has anyone used this code and can tell me what I need to pass in for the 'rectangleToCrop' parameter?
Note: Both Android and iOS give the same image, just not the central part that I was expecting.
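The working routines that follow compute the crop rectangle from the decoded photo's own pixel dimensions rather than from the display, which sidesteps the parameter question entirely. That centre-square arithmetic can be sketched on its own; here is a small Java version, where the photo dimensions in main are hypothetical examples:

```java
public class CenterCrop {
    // Compute the largest centred square inside a photo of the given size.
    // Returns {left, top, size} in the photo's own pixel coordinates.
    static int[] centerSquare(int photoWidth, int photoHeight) {
        int cropSize = Math.min(photoWidth, photoHeight);
        int left = (photoWidth - cropSize) / 2;
        int top = (photoHeight - cropSize) / 2;
        return new int[] { left, top, cropSize };
    }

    public static void main(String[] args) {
        // A hypothetical 3000x4000 portrait photo: the square is the full
        // width, vertically centred.
        int[] r = centerSquare(3000, 4000);
        System.out.println(r[0] + "," + r[1] + "," + r[2]); // 0,500,3000
    }
}
```

Using display dimensions (as in the snippet above) only coincides with this when the photo and the screen happen to share size and orientation.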

Here are the two routines I used:
Android:
public byte[] ResizeImageAndCropToSquare(byte[] rawPhoto, int outputSize)
{
    // Create a BitmapFactory.Options object for decode options
    BitmapFactory.Options options = new BitmapFactory.Options();
    // InPurgeable is used to free up memory when required
    options.InPurgeable = true;
    // Get the original image
    using (var originalImage = BitmapFactory.DecodeByteArray(rawPhoto, 0, rawPhoto.Length, options))
    {
        // The shortest edge will determine the size of the square image
        int cropSize = Math.Min(originalImage.Width, originalImage.Height);
        int left = (originalImage.Width - cropSize) / 2;
        int top = (originalImage.Height - cropSize) / 2;
        using (var squareImage = Bitmap.CreateBitmap(originalImage, left, top, cropSize, cropSize))
        {
            // Resize the square image to the correct size of an avatar
            using (var resizedImage = Bitmap.CreateScaledBitmap(squareImage, outputSize, outputSize, true))
            {
                // Return the raw data of the resized image
                using (MemoryStream resizedImageStream = new MemoryStream())
                {
                    // Compress the image maintaining 100% quality
                    resizedImage.Compress(Bitmap.CompressFormat.Png, 100, resizedImageStream);
                    return resizedImageStream.ToArray();
                }
            }
        }
    }
}
iOS:
private const int BitsPerComponent = 8;

public byte[] ResizeImageAndCropToSquare(byte[] rawPhoto, int outputSize)
{
    using (var data = NSData.FromArray(rawPhoto))
    {
        using (var photoToCrop = UIImage.LoadFromData(data).CGImage)
        {
            nint photoWidth = photoToCrop.Width;
            nint photoHeight = photoToCrop.Height;
            nint cropSize = photoWidth < photoHeight ? photoWidth : photoHeight;
            nint left = (photoWidth - cropSize) / 2;
            nint top = (photoHeight - cropSize) / 2;
            // Crop image
            using (var photoCropped = photoToCrop.WithImageInRect(new CGRect(left, top, cropSize, cropSize)))
            {
                using (var photoCroppedUIImage = UIImage.FromImage(photoCropped))
                {
                    // Create a 24bit RGB image of output size
                    using (var cGBitmapContext = new CGBitmapContext(IntPtr.Zero, outputSize, outputSize, BitsPerComponent, outputSize << 2, CGColorSpace.CreateDeviceRGB(), CGImageAlphaInfo.PremultipliedFirst))
                    {
                        var photoOutputRectangleF = new RectangleF(0f, 0f, outputSize, outputSize);
                        // Draw the cropped photo resized
                        cGBitmapContext.DrawImage(photoOutputRectangleF, photoCroppedUIImage.CGImage);
                        // Get cropped resized photo
                        var photoOutputUIImage = UIImage.FromImage(cGBitmapContext.ToImage());
                        // Convert cropped resized photo to bytes and then stream
                        using (var photoOutputNsData = photoOutputUIImage.AsPNG())
                        {
                            var rawOutput = new byte[photoOutputNsData.Length];
                            Marshal.Copy(photoOutputNsData.Bytes, rawOutput, 0, Convert.ToInt32(photoOutputNsData.Length));
                            return rawOutput;
                        }
                    }
                }
            }
        }
    }
}

Related

.NET6 Create Thumbnail Image of 8MB Image Error

I am trying to compress a large image into a thumbnail of 600x600 in .NET 6.
My code is:
public static string CreateThumbnail(int maxWidth, int maxHeight, string path)
{
    byte[] bytes = System.IO.File.ReadAllBytes(path);
    using (System.IO.MemoryStream ms = new System.IO.MemoryStream(bytes))
    {
        Image image = Image.FromStream(ms);
        return CreateThumbnail(maxWidth, maxHeight, image, path);
    }
}
I am getting the error on this line:
Image image = Image.FromStream(ms);
The error is System.Runtime.InteropServices.ExternalException: 'A generic error occurred in GDI+.'
The image size is 8 MB; the code works fine for small images. What is the problem in the code, or is there a better way to create a thumbnail for large images?
CreateThumbnail has this code, but I get the error before calling this function:
private static string CreateThumbnail(int maxWidth, int maxHeight, Image image, string path)
{
    //var image = System.Drawing.Image.FromStream(path);
    var ratioX = (double)maxWidth / image.Width;
    var ratioY = (double)maxHeight / image.Height;
    var ratio = Math.Min(ratioX, ratioY);
    var newWidth = (int)(image.Width * ratio);
    var newHeight = (int)(image.Height * ratio);
    using (var newImage = new Bitmap(newWidth, newHeight))
    {
        using (Graphics thumbGraph = Graphics.FromImage(newImage))
        {
            thumbGraph.CompositingQuality = CompositingQuality.Default;
            thumbGraph.SmoothingMode = SmoothingMode.Default;
            //thumbGraph.InterpolationMode = InterpolationMode.HighQualityBicubic;
            thumbGraph.DrawImage(image, 0, 0, newWidth, newHeight);
            image.Dispose();
            //string fileRelativePath = Path.GetFileName(path);
            //newImage.Save(path, newImage.RawFormat);
            SaveJpeg(path, newImage, 100);
        }
    }
    return path;
}
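The scaling step in CreateThumbnail just picks the smaller of the two axis ratios so the whole image fits inside maxWidth x maxHeight while preserving aspect ratio. That arithmetic can be sketched in isolation; a small Java version, with made-up image sizes:

```java
public class ThumbnailRatio {
    // Scale (width, height) to fit inside (maxWidth, maxHeight), preserving
    // aspect ratio: same ratio logic as the CreateThumbnail method above.
    static int[] fitInside(int width, int height, int maxWidth, int maxHeight) {
        double ratioX = (double) maxWidth / width;
        double ratioY = (double) maxHeight / height;
        double ratio = Math.min(ratioX, ratioY);
        return new int[] { (int) (width * ratio), (int) (height * ratio) };
    }

    public static void main(String[] args) {
        // A hypothetical 4000x3000 photo fitted into a 600x600 box
        int[] size = fitInside(4000, 3000, 600, 600);
        System.out.println(size[0] + "x" + size[1]); // 600x450
    }
}
```

Note that, like the original, this happily upscales images smaller than the box; a production version might clamp the ratio to 1.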

How to remove a selected part of an image in Flutter?

I want to send an image to the backend, but before sending it the user can select some part of it, and I need to remove the unwanted part. Below I attached some screenshots.
After three days of working on it I finally found a workaround.
If it's not an eraser I'm just drawing as always, but if it is an eraser I'm replacing the Paint with a shader, which is my background image.
if (!_eraser) {
  paint = new Paint()
    ..style = PaintingStyle.fill
    ..strokeCap = StrokeCap.round
    ..color = selectedColor
    ..strokeWidth = strokeWidth;
} else {
  final Float64List deviceTransform = new Float64List(16)
    ..[0] = devicePixelRatio
    ..[5] = devicePixelRatio
    ..[10] = 1.0
    ..[15] = 2.0;
  paint = new Paint()
    ..style = PaintingStyle.fill
    ..strokeCap = StrokeCap.round
    ..shader = ImageShader(image, TileMode.repeated, TileMode.repeated, deviceTransform)
    ..strokeWidth = strokeWidth;
}
Painting Class
@override
void paint(Canvas canvas, Size size) {
  canvas.drawImage(image, Offset.zero, new Paint());
  for (int i = 0; i < pointsList.length - 1; i++) {
    if (pointsList[i] != null && pointsList[i + 1] != null) {
      canvas.drawLine(pointsList[i].points, pointsList[i + 1].points, pointsList[i].paint);
    } else if (pointsList[i] != null && pointsList[i + 1] == null) {
      offsetPoints.clear();
      offsetPoints.add(pointsList[i].points);
      offsetPoints.add(Offset(pointsList[i].points.dx + 0.1, pointsList[i].points.dy + 0.1));
      canvas.drawCircle(pointsList[i].points, pointsList[i].paint.strokeWidth / 2, pointsList[i].paint);
    }
  }
}
If you want to crop the image based on some user input that you already have, you could use the copyCrop function from the image library.

Clipboard: What is an Object Descriptor type and how can I write it?

So I made a small application that basically draws whatever image is in the clipboard (memory).
This is a sample of the code:
private EventHandler<KeyEvent> copyPasteEvent = new EventHandler<KeyEvent>() {
    final KeyCombination ctrl_V = new KeyCodeCombination(KeyCode.V, KeyCombination.CONTROL_DOWN);

    @Override
    public void handle(KeyEvent event) {
        if (ctrl_V.match(event)) {
            System.out.println("Ctrl+V pressed");
            Clipboard clipboard = Clipboard.getSystemClipboard();
            System.out.println(clipboard.getContentTypes());
            // Change canvas size if necessary to allow space for the image to fit
            Image copiedImage = clipboard.getImage();
            if (copiedImage.getHeight() > canvas.getHeight()) {
                canvas.setHeight(copiedImage.getHeight());
            }
            if (copiedImage.getWidth() > canvas.getWidth()) {
                canvas.setWidth(copiedImage.getWidth());
            }
            gc.drawImage(clipboard.getImage(), 0, 0);
        }
    }
};
This is the image that was drawn and the corresponding data type:
A print from my screen.
An image from the internet.
However, when I copy and paste a raw image directly from Paint...
Object Descriptor is an OLE format from Microsoft.
This is why when you copy an image from a Microsoft application, you get these descriptors from Clipboard.getSystemClipboard().getContentTypes():
[[application/x-java-rawimage], [Object Descriptor]]
As for getting the image out of the clipboard... let's try two possible ways to do it: AWT and JavaFX.
AWT
Let's use the awt toolkit to get the system clipboard, and in case we have an image on it, retrieve a BufferedImage. Then we can convert it easily to a JavaFX Image and place it in an ImageView:
try {
    DataFlavor[] availableDataFlavors = Toolkit.getDefaultToolkit()
            .getSystemClipboard().getAvailableDataFlavors();
    for (DataFlavor f : availableDataFlavors) {
        System.out.println("AWT Flavor: " + f);
        if (f.equals(DataFlavor.imageFlavor)) {
            BufferedImage data = (BufferedImage) Toolkit.getDefaultToolkit().getSystemClipboard().getData(DataFlavor.imageFlavor);
            System.out.println("data " + data);
            // Convert to JavaFX:
            WritableImage img = new WritableImage(data.getWidth(), data.getHeight());
            SwingFXUtils.toFXImage(data, img);
            imageView.setImage(img);
        }
    }
} catch (UnsupportedFlavorException | IOException ex) {
    System.out.println("Error " + ex);
}
It prints:
AWT Flavor: java.awt.datatransfer.DataFlavor[mimetype=image/x-java-image;representationclass=java.awt.Image]
data BufferedImage@3e4eca95: type = 1 DirectColorModel: rmask=ff0000 gmask=ff00 bmask=ff amask=0 IntegerInterleavedRaster: width = 350 height = 364 #Bands = 3 xOff = 0 yOff = 0 dataOffset[0] 0
and displays your image:
This part was based on this answer.
JavaFX
Why didn't we try it with JavaFX in the first place? Well, we could have tried directly:
Image content = (Image) Clipboard.getSystemClipboard().getContent(DataFormat.IMAGE);
imageView.setImage(content);
and you will get a valid image, but when adding it to an ImageView, it will be blank as you already noticed, or with invalid colors.
So how can we get a valid image? If you check the BufferedImage above, it shows type = 1, which means BufferedImage.TYPE_INT_RGB = 1; in other words, it is an image with 8-bit RGB color components packed into integer pixels, without an alpha component.
My guess is that the JavaFX implementation for Windows doesn't process this image format correctly, as it probably expects an RGBA format. You can check here how the image is extracted. And if you want to dive into the native implementation, check the native-glass/win/GlassClipboard.cpp code.
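The packed-integer layout can be illustrated on its own. A tiny sketch of the 0x00RRGGBB packing that TYPE_INT_RGB uses (the byte values are arbitrary examples):

```java
public class PackedRgb {
    // TYPE_INT_RGB stores one pixel per int as 0x00RRGGBB (no alpha byte).
    static int pack(int r, int g, int b) {
        return (r << 16) | (g << 8) | b;
    }

    // Recover the three 8-bit components from a packed pixel.
    static int[] unpack(int pixel) {
        return new int[] { (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF };
    }

    public static void main(String[] args) {
        int pixel = pack(0xAB, 0xCD, 0xEF);
        int[] rgb = unpack(pixel);
        System.out.printf("%02X %02X %02X%n", rgb[0], rgb[1], rgb[2]); // AB CD EF
    }
}
```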
So we can try to do it with a PixelReader. Let's read the image and return a byte array:
private byte[] imageToData(Image image) {
    int width = (int) image.getWidth();
    int height = (int) image.getHeight();
    byte[] data = new byte[width * height * 3];
    int i = 0;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int argb = image.getPixelReader().getArgb(x, y);
            int r = (argb >> 16) & 0xFF;
            int g = (argb >> 8) & 0xFF;
            int b = argb & 0xFF;
            data[i++] = (byte) r;
            data[i++] = (byte) g;
            data[i++] = (byte) b;
        }
    }
    return data;
}
Now, all we need to do is use this byte array to write a new image and set it to the ImageView:
Image content = (Image) Clipboard.getSystemClipboard().getContent(DataFormat.IMAGE);
byte[] data = imageToData(content);
WritableImage writableImage = new WritableImage((int) content.getWidth(), (int) content.getHeight());
PixelWriter pixelWriter = writableImage.getPixelWriter();
pixelWriter.setPixels(0, 0, (int) content.getWidth(), (int) content.getHeight(),
PixelFormat.getByteRgbInstance(), data, 0, (int) content.getWidth() * 3);
imageView.setImage(writableImage);
And now you will get the same result, but only using JavaFX:

How to create barcode image with ZXing.Net and ImageSharp in .Net Core 2.0

I'm trying to generate a barcode image. When I use the following code I can create a base64 string, but it gives a blank image. I checked that the content is not blank or whitespace.
There are examples using CoreCompat.System.Drawing, but I couldn't make them work because I am working in an OS X environment.
Am I doing something wrong?
code:
[HtmlTargetElement("barcode")]
public class BarcodeHelper : TagHelper
{
    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        var content = context.AllAttributes["content"].Value.ToString();
        var alt = context.AllAttributes["alt"].Value.ToString();
        var width = 250;
        var height = 250;
        var margin = 0;
        var barcodeWriter = new ZXing.BarcodeWriterPixelData
        {
            Format = ZXing.BarcodeFormat.CODE_128,
            Options = new QrCodeEncodingOptions
            {
                Height = height, Width = width, Margin = margin
            }
        };
        var pixelData = barcodeWriter.Write(content);
        using (var image = Image.LoadPixelData<Rgba32>(pixelData.Pixels, width, height))
        {
            output.TagName = "img";
            output.Attributes.Clear();
            output.Attributes.Add("width", width);
            output.Attributes.Add("height", height);
            output.Attributes.Add("alt", alt);
            output.Attributes.Add("src", string.Format("data:image/png;base64,{0}", image.ToBase64String(ImageFormats.Png)));
        }
    }
}
There are some code snippets like the one below. They can write the content and easily convert the result to a base64 string. But when I call BarcodeWriter it needs a type parameter <TOutput>, and I don't know what to pass. I am using ZXing.Net 0.16.2.
var writer = new BarcodeWriter // BarcodeWriter without <TOutput> is missing. There is only BarcodeWriter<TOutput> I can call.
{
    Format = BarcodeFormat.CODE_128
};
var result = writer.Write("content");
The current version (0.16.2) of the pixel data renderer uses a wrong alpha channel value, so the whole barcode is transparent.
Additionally, with my version of ImageSharp I had to remove the "data:image/png;base64,{0}" prefix, because image.ToBase64String already includes it.
Complete modified code:
[HtmlTargetElement("barcode")]
public class BarcodeHelper : TagHelper
{
    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        var content = context.AllAttributes["content"].Value.ToString();
        var alt = context.AllAttributes["alt"].Value.ToString();
        var width = 250;
        var height = 250;
        var margin = 0;
        var barcodeWriter = new ZXing.BarcodeWriterPixelData
        {
            Format = ZXing.BarcodeFormat.CODE_128,
            Options = new EncodingOptions
            {
                Height = height, Width = width, Margin = margin
            },
            Renderer = new PixelDataRenderer
            {
                Foreground = new PixelDataRenderer.Color(unchecked((int)0xFF000000)),
                Background = new PixelDataRenderer.Color(unchecked((int)0xFFFFFFFF)),
            }
        };
        var pixelData = barcodeWriter.Write(content);
        using (var image = Image.LoadPixelData<Rgba32>(pixelData.Pixels, width, height))
        {
            output.TagName = "img";
            output.Attributes.Clear();
            output.Attributes.Add("width", pixelData.Width);
            output.Attributes.Add("height", pixelData.Height);
            output.Attributes.Add("alt", alt);
            output.Attributes.Add("src", image.ToBase64String(ImageFormats.Png));
        }
    }
}
It's also possible to use the ImageSharp binding package (ZXing.Net.Bindings.ImageSharp).
[HtmlTargetElement("barcode")]
public class BarcodeHelper : TagHelper
{
    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        var content = context.AllAttributes["content"].Value.ToString();
        var alt = context.AllAttributes["alt"].Value.ToString();
        var width = 250;
        var height = 250;
        var margin = 0;
        var barcodeWriter = new ZXing.ImageSharp.BarcodeWriter<Rgba32>
        {
            Format = ZXing.BarcodeFormat.CODE_128,
            Options = new EncodingOptions
            {
                Height = height, Width = width, Margin = margin
            }
        };
        using (var image = barcodeWriter.Write(content))
        {
            output.TagName = "img";
            output.Attributes.Clear();
            output.Attributes.Add("width", image.Width);
            output.Attributes.Add("height", image.Height);
            output.Attributes.Add("alt", alt);
            output.Attributes.Add("src", image.ToBase64String(ImageFormats.Png));
        }
    }
}

Windows Phone - update live tile from background agent with custom image

I am trying to add a cloud image to the album cover when the cover is loaded from the internet. I am trying to do this in a background audio agent, and I think I almost got it. The problem is that I get a black image in the tile. A few times when testing I got the cover image with my cloud image in it, but mostly I get a black image (and sometimes a black image with the cloud in it).
Can anyone help me find the problem? Thanks
private void UpdateAppTile()
{
    var apptile = ShellTile.ActiveTiles.First();
    if (apptile != null && _playList != null && _playList.Any())
    {
        var track = _playList[currentTrackNumber];
        var size = 360;
        Uri coverUrl;
        if (track.AlbumArt.OriginalString.StartsWith("http"))
        {
            BitmapImage img = null;
            using (AutoResetEvent are = new AutoResetEvent(false))
            {
                string filename = Path.GetFileNameWithoutExtension(track.AlbumArt.OriginalString);
                var urlToNewCover = String.Format("http://.../{0}/{1}", filename, size);
                coverUrl = new Uri(urlToNewCover, UriKind.Absolute);
                Deployment.Current.Dispatcher.BeginInvoke(() =>
                {
                    img = new BitmapImage(coverUrl);
                    are.Set();
                });
                are.WaitOne();
                var wbmp = CreateTileImageWithCloud(img);
                SaveTileImage(wbmp, "/shared/shellcontent/test.jpg");
                coverUrl = new Uri("isostore:/shared/shellcontent/test.jpg", UriKind.RelativeOrAbsolute);
            }
        }
        else
        {
            var coverId = track.Tag.Split(',')[1];
            var urlToNewCover = String.Format("http://.../{0}/{1}", coverId, size);
            coverUrl = new Uri(urlToNewCover, UriKind.Absolute);
        }
        var appTileData = new FlipTileData
        {
            BackgroundImage = coverUrl,
            WideBackgroundImage = coverUrl,
            ...
        };
        apptile.Update(appTileData);
    }
}
public static BitmapImage LoadBitmap(string iFilename)
{
    Uri imgUri = new Uri(iFilename, UriKind.Relative);
    StreamResourceInfo imageResource = Application.GetResourceStream(imgUri);
    BitmapImage image = new BitmapImage();
    image.SetSource(imageResource.Stream);
    return image;
}

private void SaveTileImage(WriteableBitmap wbmp, string filename)
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    {
        if (store.FileExists(filename))
            store.DeleteFile(filename);
        var stream = store.OpenFile(filename, FileMode.OpenOrCreate);
        wbmp.SaveJpeg(stream, wbmp.PixelWidth, wbmp.PixelHeight, 100, 100);
        stream.Close();
    }
}
private WriteableBitmap CreateTileImageWithCloud(BitmapImage img)
{
    Image image = null;
    WriteableBitmap wbmp = null;
    using (AutoResetEvent are = new AutoResetEvent(false))
    {
        Deployment.Current.Dispatcher.BeginInvoke(() =>
        {
            image = new Image { Source = img };
            Canvas.SetLeft(image, 0);
            Canvas.SetTop(image, 0);
            var cloud = new BitmapImage(new Uri("Assets/Images/Other/Cloud_no.png", UriKind.Relative));
            var cloudImg = new Image { Source = cloud };
            Canvas.SetLeft(cloudImg, 125);
            Canvas.SetTop(cloudImg, 10);
            var canvas = new Canvas
            {
                Height = 176,
                Width = 176
            };
            canvas.Children.Add(image);
            canvas.Children.Add(cloudImg);
            wbmp = new WriteableBitmap(176, 176);
            wbmp.Render(canvas, null);
            wbmp.Invalidate();
            are.Set();
        });
        are.WaitOne();
    }
    return wbmp;
}
Edit
I found a pattern in when this works and when it does not. When the application is running and I call this twice (in TrackReady and SkipNext), I very often get the cover image with the cloud. When only the background agent is running (without the app), I always get a black image, and often the first UpdateAppTile call gives just a black image and the second a black image with the cloud. That black colour is the default canvas background, so I guess I have a problem with the delay when loading the cover image from the URL, but I am not sure how to use the ImageOpened event in my case, or whether it would help.
I think that you should call Measure and Arrange after adding elements to canvas and before rendering canvas (as with other UIElements):
canvas.Measure( new Size( Width, Height ) );
canvas.Arrange( new Rect( 0, 0, Width, Height ) );