Is there a way to get an image's RGB matrix representation, and vice versa? I would like to perform image masking/filtering on the original image, so it needs to be applied to its RGB matrix representation. I am currently using this library to get an image from a device: https://pub.dartlang.org/packages/image_picker
https://pub.dartlang.org/packages/image provides image conversion and manipulation utility functions.
import 'dart:async';
import 'dart:ui' as ui;

import 'package:flutter/material.dart';

// for a local image example
List<int> RGBAList;

// 1. get [ImageProvider] instance
// [ExactAssetImage] extends [AssetBundleImageProvider] extends [ImageProvider]
ExactAssetImage provider = ExactAssetImage('$local_img_uri');

// 2. get [ui.Image] from the [ImageProvider]
ImageStream stream = provider.resolve(ImageConfiguration.empty);
Completer<ui.Image> completer = Completer<ui.Image>();
ImageStreamListener listener;
listener = ImageStreamListener((frame, sync) {
  ui.Image image = frame.image;
  completer.complete(image);
  stream.removeListener(listener);
});
stream.addListener(listener);

// 3. get the rgba list from the [ui.Image]
completer.future.then((ui.Image image) {
  image.toByteData(format: ui.ImageByteFormat.rawRgba).then((ByteData data) {
    RGBAList = data.buffer.asUint8List().toList();
  });
});
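For reference (a sketch, not part of the original answer): `rawRgba` produces a row-major byte list with 4 bytes per pixel in R, G, B, A order, so pixel (x, y) starts at byte (y * width + x) * 4. The index math is language-agnostic; in Java form:

```java
// Sketch of looking up one pixel in a flat RGBA byte list.
// Assumes a row-major layout with no row padding.
class RgbaLookup {
    static int[] pixelAt(byte[] rgba, int width, int x, int y) {
        int i = (y * width + x) * 4; // 4 bytes per pixel: R, G, B, A
        return new int[] {
            rgba[i] & 0xFF,     // red
            rgba[i + 1] & 0xFF, // green
            rgba[i + 2] & 0xFF, // blue
            rgba[i + 3] & 0xFF  // alpha
        };
    }

    public static void main(String[] args) {
        // A 2x2 image: the last 4 bytes belong to pixel (1, 1).
        byte[] data = {
            0, 0, 0, 0,  0, 0, 0, 0,
            0, 0, 0, 0,  10, 20, 30, 40
        };
        System.out.println(java.util.Arrays.toString(pixelAt(data, 2, 1, 1)));
        // prints [10, 20, 30, 40]
    }
}
```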
You can use this package - https://pub.dartlang.org/packages/image
import 'package:image/image.dart' as Imagi;
Here's how to use it to obtain the RGB matrix of a file from ImagePicker():
final image = await ImagePicker().pickImage(source: ImageSource.gallery);
if (image == null) return;
final imageTemp = File(image.path);
controlImage = imageTemp;
Now here's a function for obtaining the RGB matrix:
List<List<int>> imgArray = [];

void readImage() async {
  final bytes = await controlImage!.readAsBytes();
  final decoder = Imagi.JpegDecoder();
  final decodedImg = decoder.decodeImage(bytes);
  final decodedBytes = decodedImg!.getBytes(format: Imagi.Format.rgb);
  print(decodedBytes.length);
  // Read the first 1000 pixels of the second row
  // (the decodedImg.width * 3 offset skips the first row).
  // Use decodedImg.width as the limit to read a full row.
  int loopLimit = 1000;
  for (int x = 0; x < loopLimit; x++) {
    int red = decodedBytes[decodedImg.width * 3 + x * 3];
    int green = decodedBytes[decodedImg.width * 3 + x * 3 + 1];
    int blue = decodedBytes[decodedImg.width * 3 + x * 3 + 2];
    imgArray.add([red, green, blue]);
  }
  print(imgArray);
}
The list imgArray will then contain the RGB values, one [red, green, blue] entry per pixel.
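A hedged sketch of extending the row loop above to the whole image (Java form; it assumes the same 3-bytes-per-pixel, row-major layout that getBytes(format: rgb) returns):

```java
// Build a height x width x 3 RGB matrix from a flat, row-major
// RGB byte buffer (3 bytes per pixel, no row padding assumed).
class RgbMatrix {
    static int[][][] toRgbMatrix(byte[] rgb, int width, int height) {
        int[][][] matrix = new int[height][width][3];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int i = (y * width + x) * 3; // start of pixel (x, y)
                matrix[y][x][0] = rgb[i] & 0xFF;     // red
                matrix[y][x][1] = rgb[i + 1] & 0xFF; // green
                matrix[y][x][2] = rgb[i + 2] & 0xFF; // blue
            }
        }
        return matrix;
    }

    public static void main(String[] args) {
        byte[] bytes = {1, 2, 3, 4, 5, 6}; // a 2x1 image
        int[][][] m = toRgbMatrix(bytes, 2, 1);
        System.out.println(java.util.Arrays.toString(m[0][1])); // prints [4, 5, 6]
    }
}
```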
I am trying to compress a large image into a 600x600 thumbnail in .NET 6.
My code is:
public static string CreateThumbnail(int maxWidth, int maxHeight, string path)
{
byte[] bytes = System.IO.File.ReadAllBytes(path);
using (System.IO.MemoryStream ms = new System.IO.MemoryStream(bytes))
{
Image image = Image.FromStream(ms);
return CreateThumbnail(maxWidth, maxHeight, image, path);
}
}
I am getting an error on this line:
Image image = Image.FromStream(ms);
The error is: System.Runtime.InteropServices.ExternalException: 'A generic error occurred in GDI+.'
The image is 8 MB, and the code works fine for small images. What is the problem in the code, or is there a better way to create a thumbnail for large images?
CreateThumbnail has this code, but I get the error before calling that function:
private static string CreateThumbnail(int maxWidth, int maxHeight, Image image, string path)
{
//var image = System.Drawing.Image.FromStream( (path);
var ratioX = (double)maxWidth / image.Width;
var ratioY = (double)maxHeight / image.Height;
var ratio = Math.Min(ratioX, ratioY);
var newWidth = (int)(image.Width * ratio);
var newHeight = (int)(image.Height * ratio);
using (var newImage = new Bitmap(newWidth, newHeight))
{
using (Graphics thumbGraph = Graphics.FromImage(newImage))
{
thumbGraph.CompositingQuality = CompositingQuality.Default;
thumbGraph.SmoothingMode = SmoothingMode.Default;
//thumbGraph.InterpolationMode = InterpolationMode.HighQualityBicubic;
thumbGraph.DrawImage(image, 0, 0, newWidth, newHeight);
image.Dispose();
//string fileRelativePath = Path.GetFileName(path);
//newImage.Save(path, newImage.RawFormat);
SaveJpeg(path, newImage, 100);
}
}
return path;
}
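As an aside, the scaling step in CreateThumbnail is just "fit within a box while preserving aspect ratio": take the smaller of the two ratios and apply it to both dimensions. A minimal sketch of that math (Java, hypothetical helper name):

```java
// Compute the largest size that fits within maxW x maxH while
// preserving the aspect ratio of a w x h image.
class FitWithin {
    static int[] fit(int maxW, int maxH, int w, int h) {
        double ratio = Math.min((double) maxW / w, (double) maxH / h);
        return new int[] { (int) (w * ratio), (int) (h * ratio) };
    }

    public static void main(String[] args) {
        // A 3000x2000 photo into a 600x600 box: ratio = min(0.2, 0.3) = 0.2
        System.out.println(java.util.Arrays.toString(fit(600, 600, 3000, 2000)));
        // prints [600, 400]
    }
}
```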
I have this code to extract color values from an image in Flutter, but it uses an image from a URL.
I need the user to input an image instead of using the link.
I will use image_picker, but I don't know how to connect the two pieces of code:
import 'package:http/http.dart';
import 'package:image/image.dart';
Future<void> main() async {
  final url = 'https://github.githubassets.com/images/modules/open_graph/github-octocat.png';
  final resp = await get(Uri.parse(url)); // Download the image data from the url
  final img = resp.bodyBytes;
  final decodedImg = decodeImage(img); // Decode the received image data
  if (decodedImg == null) {
    throw 'Invalid image';
  }
  final bytesList = decodedImg.data;
  final colorList = bytesList
      .map<Color>((e) => Color(e))
      .toList(); // Map the decoded data to colors
  // Change format to a 2d list of colors so that they can be accessed as colorGrid[x][y]
  final List<List<Color>> colorGrid = [];
  for (int x = 0; x < decodedImg.width; x++) {
    colorGrid.add([]);
    for (int y = 0; y < decodedImg.height; y++) {
      colorGrid[x].add(colorList[x + y * decodedImg.width]);
    }
  }
  print(colorGrid);
}
/// Stores RGBA values
class Color {
  final int alpha, blue, green, red;

  Color(int abgr)
      : alpha = abgr >> 24 & 0xFF,
        blue = abgr >> 16 & 0xFF,
        green = abgr >> 8 & 0xFF,
        red = abgr & 0xFF;

  @override
  String toString() {
    return 'R: $red, G: $green, B: $blue, A: $alpha';
  }
}
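The constructor above unpacks one channel per byte from a packed 32-bit ABGR integer. The same bit arithmetic works in Java (a sketch; the shift plus `& 0xFF` isolates each byte regardless of sign extension):

```java
// Unpack a 32-bit ABGR value into [red, green, blue, alpha].
class AbgrUnpack {
    static int[] unpack(int abgr) {
        int alpha = (abgr >> 24) & 0xFF;
        int blue  = (abgr >> 16) & 0xFF;
        int green = (abgr >> 8) & 0xFF;
        int red   = abgr & 0xFF;
        return new int[] { red, green, blue, alpha };
    }

    public static void main(String[] args) {
        // 0xFF336699: alpha 0xFF, blue 0x33, green 0x66, red 0x99
        System.out.println(java.util.Arrays.toString(unpack(0xFF336699)));
        // prints [153, 102, 51, 255]
    }
}
```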
I think image_picker will be best for you; it lets you take a picture with the camera or pick one from the gallery:
use image_picker
https://pub.dev/packages/image_picker
I am trying to get an image using the camera. The image is to be 256x256 and I want it to come from the centre of a photo taken using the camera on a phone. I found this code at: https://forums.xamarin.com/discussion/37647/cross-platform-crop-image-view
I am using this code for Android...
public byte[] CropPhoto(byte[] photoToCropBytes, Rectangle rectangleToCrop, double outputWidth, double outputHeight)
{
using (var photoOutputStream = new MemoryStream())
{
// Load the bitmap
var inSampleSize = CalculateInSampleSize((int)rectangleToCrop.Width, (int)rectangleToCrop.Height, (int)outputWidth, (int)outputHeight);
var options = new BitmapFactory.Options();
options.InSampleSize = inSampleSize;
//options.InPurgeable = true; see http://developer.android.com/reference/android/graphics/BitmapFactory.Options.html
using (var photoToCropBitmap = BitmapFactory.DecodeByteArray(photoToCropBytes, 0, photoToCropBytes.Length, options))
{
var matrix = new Matrix();
var matrixScale = outputWidth / rectangleToCrop.Width * inSampleSize;
matrix.PostScale((float)matrixScale, (float)matrixScale);
using (var photoCroppedBitmap = Bitmap.CreateBitmap(photoToCropBitmap, (int)(rectangleToCrop.X / inSampleSize), (int)(rectangleToCrop.Y / inSampleSize), (int)(rectangleToCrop.Width / inSampleSize), (int)(rectangleToCrop.Height / inSampleSize), matrix, true))
{
photoCroppedBitmap.Compress(Bitmap.CompressFormat.Jpeg, 100, photoOutputStream);
}
}
return photoOutputStream.ToArray();
}
}
public static int CalculateInSampleSize(int inputWidth, int inputHeight, int outputWidth, int outputHeight)
{
//see http://developer.android.com/training/displaying-bitmaps/load-bitmap.html
int inSampleSize = 1; //default
if (inputHeight > outputHeight || inputWidth > outputWidth) {
int halfHeight = inputHeight / 2;
int halfWidth = inputWidth / 2;
// Calculate the largest inSampleSize value that is a power of 2 and keeps both
// height and width larger than the requested height and width.
while ((halfHeight / inSampleSize) > outputHeight && (halfWidth / inSampleSize) > outputWidth)
{
inSampleSize *= 2;
}
}
return inSampleSize;
}
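CalculateInSampleSize follows the Android docs' bitmap-loading recipe: find the largest power-of-two divisor that still leaves both dimensions above the target. A Java transliteration with a worked example (the numbers are hypothetical):

```java
class SampleSize {
    static int calculateInSampleSize(int inW, int inH, int outW, int outH) {
        int inSampleSize = 1;
        if (inH > outH || inW > outW) {
            int halfH = inH / 2;
            int halfW = inW / 2;
            // Largest power of 2 that keeps both dimensions above the target.
            while ((halfH / inSampleSize) > outH && (halfW / inSampleSize) > outW) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }

    public static void main(String[] args) {
        // 4000x3000 down to 256x256: stops at 8 (4000/8 = 500, 3000/8 = 375)
        System.out.println(calculateInSampleSize(4000, 3000, 256, 256)); // prints 8
    }
}
```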
and this code for iOS...
public byte[] CropPhoto(byte[] photoToCropBytes, Xamarin.Forms.Rectangle
rectangleToCrop, double outputWidth, double outputHeight)
{
byte[] photoOutputBytes;
using (var data = NSData.FromArray(photoToCropBytes))
{
using (var photoToCropCGImage = UIImage.LoadFromData(data).CGImage)
{
//crop image
using (var photoCroppedCGImage = photoToCropCGImage.WithImageInRect(new CGRect((nfloat)rectangleToCrop.X, (nfloat)rectangleToCrop.Y, (nfloat)rectangleToCrop.Width, (nfloat)rectangleToCrop.Height)))
{
using (var photoCroppedUIImage = UIImage.FromImage(photoCroppedCGImage))
{
//create a 24bit RGB image to the output size
using (var cGBitmapContext = new CGBitmapContext(IntPtr.Zero, (int)outputWidth, (int)outputHeight, 8, (int)(4 * outputWidth), CGColorSpace.CreateDeviceRGB(), CGImageAlphaInfo.PremultipliedFirst))
{
var photoOutputRectangleF = new RectangleF(0f, 0f, (float)outputWidth, (float)outputHeight);
// draw the cropped photo resized
cGBitmapContext.DrawImage(photoOutputRectangleF, photoCroppedUIImage.CGImage);
//get cropped resized photo
var photoOutputUIImage = UIKit.UIImage.FromImage(cGBitmapContext.ToImage());
//convert cropped resized photo to bytes and then stream
using (var photoOutputNsData = photoOutputUIImage.AsJPEG())
{
photoOutputBytes = new Byte[photoOutputNsData.Length];
System.Runtime.InteropServices.Marshal.Copy(photoOutputNsData.Bytes, photoOutputBytes, 0, Convert.ToInt32(photoOutputNsData.Length));
}
}
}
}
}
}
return photoOutputBytes;
}
I am struggling to work out exactly what the parameters are to call the function.
Currently, I am doing the following:
double cropSize = Math.Min(DeviceDisplay.MainDisplayInfo.Width, DeviceDisplay.MainDisplayInfo.Height);
double left = (DeviceDisplay.MainDisplayInfo.Width - cropSize) / 2.0;
double top = (DeviceDisplay.MainDisplayInfo.Height - cropSize) / 2.0;
// Get a square resized and cropped from the top image as a byte[]
_imageData = mediaService.CropPhoto(_imageData, new Rectangle(left, top, cropSize, cropSize), 256, 256);
I was expecting this to crop the image to the central square (in portrait mode side length would be the width of the photo) and then scale it down to a 256x256 image. But it never picks the centre of the image.
Has anyone ever used this code and can tell me what I need to pass in for the 'rectangleToCrop' parameter?
Note: Both Android and iOS give the same image, just not the central part that I was expecting.
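One likely cause (an assumption, not verified against the library): the rectangle is built from DeviceDisplay screen dimensions, while CropPhoto crops in the photo's own pixel space, and a camera photo is rarely the same size as the screen. A sketch of computing the centred square from the image's dimensions instead (Java form, hypothetical helper):

```java
// Centre square of an imgW x imgH image, in image pixel coordinates:
// returns { left, top, size, size }.
class CentreSquare {
    static int[] centreSquare(int imgW, int imgH) {
        int size = Math.min(imgW, imgH);
        int left = (imgW - size) / 2;
        int top = (imgH - size) / 2;
        return new int[] { left, top, size, size };
    }

    public static void main(String[] args) {
        // A 4000x3000 landscape photo: crop 3000x3000 starting at x = 500.
        System.out.println(java.util.Arrays.toString(centreSquare(4000, 3000)));
        // prints [500, 0, 3000, 3000]
    }
}
```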
Here are the two routines I used:
Android:
public byte[] ResizeImageAndCropToSquare(byte[] rawPhoto, int outputSize)
{
// Create object of bitmapfactory's option method for further option use
BitmapFactory.Options options = new BitmapFactory.Options();
// InPurgeable is used to free up memory while required
options.InPurgeable = true;
// Get the original image
using (var originalImage = BitmapFactory.DecodeByteArray(rawPhoto, 0, rawPhoto.Length, options))
{
// The shortest edge will determine the size of the square image
int cropSize = Math.Min(originalImage.Width, originalImage.Height);
int left = (originalImage.Width - cropSize) / 2;
int top = (originalImage.Height - cropSize) / 2;
using (var squareImage = Bitmap.CreateBitmap(originalImage, left, top, cropSize, cropSize))
{
// Resize the square image to the correct size of an Avatar
using (var resizedImage = Bitmap.CreateScaledBitmap(squareImage, outputSize, outputSize, true))
{
// Return the raw data of the resized image
using (MemoryStream resizedImageStream = new MemoryStream())
{
// Resize the image maintaining 100% quality
resizedImage.Compress(Bitmap.CompressFormat.Png, 100, resizedImageStream);
return resizedImageStream.ToArray();
}
}
}
}
}
iOS:
private const int BitsPerComponent = 8;
public byte[] ResizeImageAndCropToSquare(byte[] rawPhoto, int outputSize)
{
using (var data = NSData.FromArray(rawPhoto))
{
using (var photoToCrop = UIImage.LoadFromData(data).CGImage)
{
nint photoWidth = photoToCrop.Width;
nint photoHeight = photoToCrop.Height;
nint cropSize = photoWidth < photoHeight ? photoWidth : photoHeight;
nint left = (photoWidth - cropSize) / 2;
nint top = (photoHeight - cropSize) / 2;
// Crop image
using (var photoCropped = photoToCrop.WithImageInRect(new CGRect(left, top, cropSize, cropSize)))
{
using (var photoCroppedUIImage = UIImage.FromImage(photoCropped))
{
// Create a 24bit RGB image of output size
using (var cGBitmapContext = new CGBitmapContext(IntPtr.Zero, outputSize, outputSize, BitsPerComponent, outputSize << 2, CGColorSpace.CreateDeviceRGB(), CGImageAlphaInfo.PremultipliedFirst))
{
var photoOutputRectangleF = new RectangleF(0f, 0f, outputSize, outputSize);
// Draw the cropped photo resized
cGBitmapContext.DrawImage(photoOutputRectangleF, photoCroppedUIImage.CGImage);
// Get cropped resized photo
var photoOutputUIImage = UIImage.FromImage(cGBitmapContext.ToImage());
// Convert cropped resized photo to bytes and then stream
using (var photoOutputNsData = photoOutputUIImage.AsPNG())
{
var rawOutput = new byte[photoOutputNsData.Length];
Marshal.Copy(photoOutputNsData.Bytes, rawOutput, 0, Convert.ToInt32(photoOutputNsData.Length));
return rawOutput;
}
}
}
}
}
}
}
So I made a small application that takes whatever image is in the clipboard (memory) and tries to draw it.
This is a sample of the code:
private EventHandler<KeyEvent> copyPasteEvent = new EventHandler<KeyEvent>() {
    final KeyCombination ctrl_V = new KeyCodeCombination(KeyCode.V, KeyCombination.CONTROL_DOWN);

    @Override
    public void handle(KeyEvent event) {
        if (ctrl_V.match(event)) {
            System.out.println("Ctrl+V pressed");
            Clipboard clipboard = Clipboard.getSystemClipboard();
            System.out.println(clipboard.getContentTypes());
            // Change canvas size if necessary to allow space for the image to fit
            Image copiedImage = clipboard.getImage();
            if (copiedImage.getHeight() > canvas.getHeight()) {
                canvas.setHeight(copiedImage.getHeight());
            }
            if (copiedImage.getWidth() > canvas.getWidth()) {
                canvas.setWidth(copiedImage.getWidth());
            }
            gc.drawImage(copiedImage, 0, 0);
        }
    }
};
This is the image that was drawn and the corresponding data type:
A screenshot from my screen.
An image from the internet.
However, when I copy and paste a raw image directly from Paint...
Object Descriptor is an OLE format from Microsoft.
This is why when you copy an image from a Microsoft application, you get these descriptors from Clipboard.getSystemClipboard().getContentTypes():
[[application/x-java-rawimage], [Object Descriptor]]
As for getting the image out of the clipboard... let's try two possible ways to do it: AWT and JavaFX.
AWT
Let's use the awt toolkit to get the system clipboard, and in case we have an image on it, retrieve a BufferedImage. Then we can convert it easily to a JavaFX Image and place it in an ImageView:
try {
DataFlavor[] availableDataFlavors = Toolkit.getDefaultToolkit().
getSystemClipboard().getAvailableDataFlavors();
for (DataFlavor f : availableDataFlavors) {
System.out.println("AWT Flavor: " + f);
if (f.equals(DataFlavor.imageFlavor)) {
BufferedImage data = (BufferedImage) Toolkit.getDefaultToolkit().getSystemClipboard().getData(DataFlavor.imageFlavor);
System.out.println("data " + data);
// Convert to JavaFX:
WritableImage img = new WritableImage(data.getWidth(), data.getHeight());
SwingFXUtils.toFXImage((BufferedImage) data, img);
imageView.setImage(img);
}
}
} catch (UnsupportedFlavorException | IOException ex) {
System.out.println("Error " + ex);
}
It prints:
AWT Flavor: java.awt.datatransfer.DataFlavor[mimetype=image/x-java-image;representationclass=java.awt.Image]
data BufferedImage@3e4eca95: type = 1 DirectColorModel: rmask=ff0000 gmask=ff00 bmask=ff amask=0 IntegerInterleavedRaster: width = 350 height = 364 #Bands = 3 xOff = 0 yOff = 0 dataOffset[0] 0
and displays your image:
This part was based on this answer.
JavaFX
Why didn't we try it with JavaFX in the first place? Well, we could have tried directly:
Image content = (Image) Clipboard.getSystemClipboard().getContent(DataFormat.IMAGE);
imageView.setImage(content);
and you will get a valid image, but when adding it to an ImageView, it will be blank as you already noticed, or will show invalid colors.
So how can we get a valid image? If you check the BufferedImage above, it shows type = 1, which means BufferedImage.TYPE_INT_RGB = 1; in other words, it is an image with 8-bit RGB color components packed into integer pixels, without an alpha component.
My guess is that the JavaFX implementation for Windows doesn't process this image format correctly, as it probably expects an RGBA format. You can check here how the image is extracted. And if you want to dive into the native implementation, check the native-glass/win/GlassClipboard.cpp code.
So we can try to do it with a PixelReader. Let's read the image and return a byte array:
private byte[] imageToData(Image image) {
    int width = (int) image.getWidth();
    int height = (int) image.getHeight();
    byte[] data = new byte[width * height * 3];
    int i = 0;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int argb = image.getPixelReader().getArgb(x, y);
            int r = (argb >> 16) & 0xFF;
            int g = (argb >> 8) & 0xFF;
            int b = argb & 0xFF;
            data[i++] = (byte) r;
            data[i++] = (byte) g;
            data[i++] = (byte) b;
        }
    }
    return data;
}
Now, all we need to do is use this byte array to write a new image and set it to the ImageView:
Image content = (Image) Clipboard.getSystemClipboard().getContent(DataFormat.IMAGE);
byte[] data = imageToData(content);
WritableImage writableImage = new WritableImage((int) content.getWidth(), (int) content.getHeight());
PixelWriter pixelWriter = writableImage.getPixelWriter();
pixelWriter.setPixels(0, 0, (int) content.getWidth(), (int) content.getHeight(),
PixelFormat.getByteRgbInstance(), data, 0, (int) content.getWidth() * 3);
imageView.setImage(writableImage);
And now you will get the same result, but only using JavaFX:
I have code that does the following: I open the image that I want to upload, then convert it to grayscale and later to a binary image. But I have a question: how do I get the (0, 1) values of the binary image in order to create a matrix from those values with Emgu CV in C#?
OpenFileDialog Openfile = new OpenFileDialog();
if (Openfile.ShowDialog() == DialogResult.OK)
{
    Image<Gray, Byte> My_Image = new Image<Gray, byte>(Openfile.FileName);
    pictureBox1.Image = My_Image.ToBitmap();
    My_Image = My_Image.ThresholdBinary(new Gray(69), new Gray(255));
    pictureBox2.Image = My_Image.ToBitmap();
}
Second Post
I think I misunderstood the question, sorry for giving wrong info. But I think you may get some understanding from this post? Work with matrix in emgu cv
First post
By passing your My_Image, which is the result of ThresholdBinary(), to the following function, you can get an array containing only zeros and ones for the binary image.
public int[] ZeroOneArray(Image<Gray, byte> binaryImage)
{
    // Get the byte values after Image.ThresholdBinary():
    // each is either 0 or 255.
    byte[] imageBytes = binaryImage.Bytes;
    // Map 255 to 1 and keep 0 as 0.
    var binary_0_Or_255 = from byteInt in imageBytes select byteInt / 255;
    // Convert to an array.
    int[] arrayOnlyOneOrZero = binary_0_Or_255.ToArray();
    // Check the array content.
    foreach (var bin in arrayOnlyOneOrZero)
    {
        Console.WriteLine(bin);
    }
    return arrayOnlyOneOrZero;
}
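The key step is just integer division: a thresholded pixel is 0 or 255, and dividing by 255 maps those to 0 and 1. The same idea in Java, for clarity (a sketch):

```java
// Map thresholded pixel values (0 or 255) to 0 or 1 by integer division.
class ZeroOne {
    static int[] toZeroOne(byte[] thresholded) {
        int[] out = new int[thresholded.length];
        for (int i = 0; i < thresholded.length; i++) {
            out[i] = (thresholded[i] & 0xFF) / 255; // 0 -> 0, 255 -> 1
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] pixels = { 0, (byte) 255, (byte) 255, 0 };
        System.out.println(java.util.Arrays.toString(toZeroOne(pixels)));
        // prints [0, 1, 1, 0]
    }
}
```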
Is this what you want? Thanks.
Third Post
Building on Chris's answer in "error copying image to array", I wrote a function for you that transfers your gray binary image to a gray matrix image:
public Image<Gray, double> GrayBinaryImageToMatrixImage(Image<Gray, byte> binaryImage)
{
    byte[] imageBytes = binaryImage.Bytes;
    Image<Gray, double> gray_image_div = new Image<Gray, double>(binaryImage.Size); // empty image, method one
    // or
    Image<Gray, double> gray_image_div_II = binaryImage.Convert<Gray, double>().CopyBlank(); // empty image, method two
    // Transfer the binary image bytes to the gray image matrix.
    // Bytes is row-major, so pixel (row, col) lives at row * width + col
    // (assuming the row stride equals the width).
    for (int row = 0; row < binaryImage.Height; row++)
    {
        for (int col = 0; col < binaryImage.Width; col++)
        {
            if (imageBytes[row * binaryImage.Width + col] == 0)
            {
                // a gray image has only one channel
                gray_image_div.Data[row, col, 0] = 0;
            }
            else if (imageBytes[row * binaryImage.Width + col] == 255)
            {
                gray_image_div.Data[row, col, 0] = 255;
            }
        }
    }
    return gray_image_div;
}
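One caveat with the function above (an assumption worth checking against the Emgu CV docs): pixel buffers like Bytes are often padded so that each row starts at an aligned stride, in which case the index is row * stride + col rather than row * width + col. A stride-aware sketch in Java:

```java
// Copy a single-channel, row-major buffer with padded rows
// (stride >= width) into an unpadded height x width matrix.
class StrideCopy {
    static int[][] toMatrix(byte[] buffer, int width, int height, int stride) {
        int[][] matrix = new int[height][width];
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                matrix[row][col] = buffer[row * stride + col] & 0xFF;
            }
        }
        return matrix;
    }

    public static void main(String[] args) {
        // A 2x2 image stored with stride 3 (one padding byte per row).
        byte[] buffer = { 5, 6, 0, 7, 8, 0 };
        int[][] m = toMatrix(buffer, 2, 2, 3);
        System.out.println(m[1][0]); // prints 7
    }
}
```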