In my Xamarin.Forms app, I have an image under androidProject/Resources/drawable/myImage.png. To load it from Xamarin.Forms, you can simply do:
Image myImage = new Image() { Source = ImageSource.FromFile("myImage.png") };
However, there is no way to draw a Xamarin.Forms.Image using NGraphics. Instead, NGraphics' DrawImage(IImage) requires an NGraphics.IImage, and as far as I can tell there is no way to turn a Xamarin.Forms.Image into one. In fact, the only way I could find to load an IImage is:
IImage myImage = Platform.LoadImage("myImage.png");
However, this doesn't work because under the hood it uses BitmapFactory.decodeFile(), which requires an absolute file path, and I couldn't find any way to get the absolute file path of a drawable resource (if one even exists).
So, how do I actually load and display an image using NGraphics?
NGraphics does not provide any helpers to load images from your platform's resource files.
You could do something like the following; however, it adds some overhead by converting back and forth between bitmap -> stream -> bitmap.
Android:
Stream GetDrawableStream(Context context, int resourceId)
{
    var drawable = ResourcesCompat.GetDrawable(context.Resources, resourceId, context.Theme);
    if (drawable is BitmapDrawable bitmapDrawable)
    {
        var stream = new MemoryStream();
        var bitmap = bitmapDrawable.Bitmap;
        bitmap.Compress(Bitmap.CompressFormat.Png, 80, stream);
        stream.Position = 0; // rewind; Compress leaves the position at the end of the stream
        // Note: don't Recycle() the bitmap here - it is still owned by the cached drawable.
        return stream;
    }
    return null;
}
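Once you have the stream, it can go straight into NGraphics' platform image loader. A minimal sketch, assuming your NGraphics version exposes a LoadImage(Stream) overload on its IPlatform object and that Platforms.Current is how you obtain it in your setup (Resource.Drawable.myImage is an illustrative resource id):
// Rewound PNG stream -> NGraphics IImage, without touching the file system.
using (var stream = GetDrawableStream(context, Resource.Drawable.myImage))
{
    IImage myImage = Platforms.Current.LoadImage(stream);
    // myImage can now be passed to ICanvas.DrawImage(...)
}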
iOS:
Stream GetImageStream(string fileName)
{
    using (var image = UIImage.FromFile(fileName))
    using (var imageData = image.AsPNG())
    {
        var byteArray = new byte[imageData.Length];
        System.Runtime.InteropServices.Marshal.Copy(imageData.Bytes, byteArray, 0, Convert.ToInt32(imageData.Length));
        return new MemoryStream(byteArray);
    }
}
However, you could skip the stream entirely on Android and go directly from Bitmap to BitmapImage:
BitmapImage GetBitmapFromDrawable(Context context, int resourceId)
{
    var drawable = ResourcesCompat.GetDrawable(context.Resources, resourceId, context.Theme);
    if (drawable is BitmapDrawable bitmapDrawable)
    {
        var bitmap = bitmapDrawable.Bitmap;
        return new BitmapImage(bitmap);
    }
    return null;
}
And on iOS:
CGImageImage GetCGImageImage(string fileName)
{
    var iOSImage = UIImage.FromFile(fileName);
    var cgImage = new CGImageImage(iOSImage.CGImage, iOSImage.Scale);
    return cgImage;
}
BitmapImage and CGImageImage implement IImage in NGraphics.
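To actually draw one of these IImages, for example inside an NControlView's Draw override, a hedged sketch using the Android helper above (the 100x100 frame and resource id are illustrative):
// Draw the platform-loaded image onto an NGraphics canvas.
void DrawMyImage(ICanvas canvas, Context context)
{
    IImage image = GetBitmapFromDrawable(context, Resource.Drawable.myImage);
    if (image != null)
    {
        canvas.DrawImage(image, new Rect(0, 0, 100, 100));
    }
}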
Related
I am using the following sample to resize uploaded images with Blazor WebAssembly:
https://www.prowaretech.com/Computer/Blazor/Examples/WebApi/UploadImages
I also need the original file to be converted to base64, and I don't know how I can access it. I tried to find the file's original width and height to pass to the RequestImageFileAsync function, but without success. I need to store both files: the original one and the resized one.
Can you help me, please? Thank you very much!
The InputFile control emits an IBrowserFile type. RequestImageFileAsync is a convenience method on IBrowserFile to resize the image and convert the type. The result is still an IBrowserFile.
One way to do what you are asking is with SixLabors.ImageSharp. Based on the ProWareTech example, something like this...
async Task OnChange(InputFileChangeEventArgs e)
{
    var files = e.GetMultipleFiles(); // get the files selected by the user
    foreach (var file in files)
    {
        // Original-sized file
        var buf1 = new byte[file.Size];
        using (var stream = file.OpenReadStream())
        {
            await stream.ReadAsync(buf1); // copy the stream to the buffer
        }
        origFilesBase64.Add(new ImageFile { base64data = Convert.ToBase64String(buf1), contentType = file.ContentType, fileName = file.Name }); // convert to a base64 string

        // Resized file
        var resizedFile = await file.RequestImageFileAsync(file.ContentType, 640, 480); // resize the image file
        var buf = new byte[resizedFile.Size]; // allocate a buffer to fill with the file's data
        using (var stream = resizedFile.OpenReadStream())
        {
            await stream.ReadAsync(buf); // copy the stream to the buffer
        }
        filesBase64.Add(new ImageFile { base64data = Convert.ToBase64String(buf), contentType = file.ContentType, fileName = file.Name }); // convert to a base64 string
    }

    // To get the image sizes for the first image, decode the stored base64
    // data back to bytes and load it with ImageSharp.
    using var origImage = Image.Load(Convert.FromBase64String(origFilesBase64[0].base64data));
    int origImgHeight = origImage.Height;
    int origImgWidth = origImage.Width;

    using var resizedImage = Image.Load(Convert.FromBase64String(filesBase64[0].base64data));
    int resizedImgHeight = resizedImage.Height;
    int resizedImgWidth = resizedImage.Width;
}
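For reference, the snippet assumes an ImageFile DTO and two backing lists along the lines of the ProWareTech sample; a minimal sketch (property names follow the usage above):
// Assumed supporting type and fields (not shown in the snippet above).
public class ImageFile
{
    public string base64data { get; set; }
    public string contentType { get; set; }
    public string fileName { get; set; }
}

List<ImageFile> origFilesBase64 = new();
List<ImageFile> filesBase64 = new();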
I'm trying to implement a UI where the user can edit and apply effects to an uploaded image, and I want to save the image with the BlendMode merged in. Is it possible to save the result of the blended image, or to apply it using the Canvas?
There are some packages that apply specific filters, but I want something more customizable for the end user.
I've already seen some examples of how to use Canvas to draw images, but I can't figure out how to load an image and apply the blend described in the docs. Could anyone give an example?
UPDATED:
For whoever has the same question, below is the code for saving an image from a canvas to a file with a BlendMode applied.
But I still don't have the result I expected. The quality of the generated image isn't the same as the original image, the blend doesn't seem to be the one I applied, and I can't save as a jpg file, only as png.
So, how can I load an image, apply a blend with canvas, and save it as a jpg file without losing quality?
CODE:
import 'dart:io';
import 'dart:typed_data';
import 'dart:ui' as ui;

import 'package:flutter/material.dart';
import 'package:path_provider/path_provider.dart';

const kCanvasSize = 200.0;

class CanvasImageToFile {
  CanvasImageToFile._();
  static final instance = CanvasImageToFile._();

  ByteData _readFromFile(File file) {
    Uint8List bytes = file.readAsBytesSync();
    return ByteData.view(bytes.buffer);
  }

  Future<File> _writeToFile(ByteData data) async {
    String dir = (await getTemporaryDirectory()).path;
    String filePath = '$dir/tempImage.jpg';
    final buffer = data.buffer;
    return new File(filePath).writeAsBytes(
        buffer.asUint8List(data.offsetInBytes, data.lengthInBytes));
  }

  Future<ui.Image> _loadImageSource(File imageSource) async {
    ByteData data = _readFromFile(imageSource);
    ui.Codec codec = await ui.instantiateImageCodec(data.buffer.asUint8List());
    ui.FrameInfo fi = await codec.getNextFrame();
    return fi.image;
  }

  Future<File> generateImage(File imageSource) async {
    File imageResult;
    ui.Image image;
    await _loadImageSource(imageSource).then((value) {
      image = value;
    });
    if (image != null) {
      final recorder = ui.PictureRecorder();
      var rect =
          Rect.fromPoints(Offset(0.0, 0.0), Offset(kCanvasSize, kCanvasSize));
      final canvas = Canvas(recorder, rect);
      Size outputSize = rect.size;
      Paint paint = new Paint();
      // OVERLAY - BlendMode uses the previously drawn content as a mask
      paint.blendMode = BlendMode.colorBurn;
      paint.color = Colors.red;
      // paint.colorFilter = ColorFilter.mode(Colors.blue, BlendMode.colorDodge);

      // Image
      Size inputSize = Size(image.width.toDouble(), image.height.toDouble());
      final FittedSizes fittedSizes =
          applyBoxFit(BoxFit.cover, inputSize, outputSize);
      final Size sourceSize = fittedSizes.source;
      final Rect sourceRect =
          Alignment.center.inscribe(sourceSize, Offset.zero & inputSize);
      canvas.saveLayer(rect, paint);
      canvas.drawImageRect(image, sourceRect, rect, paint);
      canvas.restore();

      final picture = recorder.endRecording();
      final img = await picture.toImage(200, 200);
      final byteData = await img.toByteData(format: ui.ImageByteFormat.png);
      await _writeToFile(byteData).then((value) {
        imageResult = value;
      });
      return imageResult;
    }
    return imageResult; // null if the source image could not be loaded
  }
}
After some research and some adjustments to my previous code, decoding the image to rawUnmodified instead of png (using the Bitmap package), I could save the image in its original format (jpg) and achieve what I wanted. If anyone has the same question, below is the code to load an image with canvas, apply a blend, and write it to a file with the same quality:
// Requires: import 'package:bitmap/bitmap.dart';
Future<File> generateImage(
    File imageSource, Color color, BlendMode blendMode) async {
  File imageResult;
  ui.Image image;
  await _loadImageSource(imageSource).then((value) {
    image = value;
  });
  if (image != null) {
    final recorder = ui.PictureRecorder();
    var rect = Rect.fromPoints(Offset(0.0, 0.0),
        Offset(image.width.toDouble(), image.height.toDouble()));
    final canvas = Canvas(recorder, rect);
    Size outputSize = rect.size;
    Paint paint = new Paint();
    // Apply the blend through a ColorFilter instead of paint.blendMode
    paint.colorFilter = ColorFilter.mode(color, blendMode);

    // Image
    Size inputSize = Size(image.width.toDouble(), image.height.toDouble());
    final FittedSizes fittedSizes =
        applyBoxFit(BoxFit.contain, inputSize, outputSize);
    final Size sourceSize = fittedSizes.source;
    final Rect sourceRect =
        Alignment.center.inscribe(sourceSize, Offset.zero & inputSize);
    canvas.drawImageRect(image, sourceRect, rect, paint);

    final picture = recorder.endRecording();
    final img = await picture.toImage(image.width, image.height);
    ByteData byteData =
        await img.toByteData(format: ui.ImageByteFormat.rawUnmodified);
    // Wrap the raw pixels with a BMP header via the bitmap package, so the
    // bytes can be written out without a lossy re-encode.
    Bitmap bitmap = Bitmap.fromHeadless(
        image.width, image.height, byteData.buffer.asUint8List());
    Uint8List headedIntList = bitmap.buildHeaded();
    await _writeToFile(headedIntList.buffer.asByteData()).then((value) {
      imageResult = value;
    });
    return imageResult;
  }
  return imageResult;
}
We have a Xamarin.Forms solution, and in the iOS project we are trying to take a photo on a button click. The problem is that the image is very dark, almost black. Why is this happening? Here is the code:
var _captureSession = new AVCaptureSession();
var _captureDevice = AVCaptureDevice.GetDefaultDevice(AVMediaType.Video);
var _captureDeviceInput = AVCaptureDeviceInput.FromDevice(_captureDevice);
_captureSession.AddInput(_captureDeviceInput);
_captureSession.StartRunning();

private async void OnButtonClick()
{
    var output = new AVCaptureStillImageOutput { OutputSettings = new NSDictionary(AVVideo.CodecKey, AVVideo.CodecJPEG) };
    _captureSession.AddOutput(output);
    var buffer = await output.CaptureStillImageTaskAsync(output.Connections[0]);
    NSData data = AVCaptureStillImageOutput.JpegStillToNSData(buffer);
    UIImage image = UIImage.LoadFromData(data);
    //image = RotateImage(image);
    NSData imageData = image.AsPNG();
    byte[] byteArray = imageData.ToArray();
    IFolder folder = FileSystem.Current.LocalStorage;
    IFile file = await folder.CreateFileAsync("image.png", CreationCollisionOption.ReplaceExisting);
    using (Stream stream = await file.OpenAsync(PCLStorage.FileAccess.ReadAndWrite))
    {
        stream.Write(byteArray, 0, byteArray.Length);
    }
}
The line:
_captureSession.AddOutput(output);
should run before the button click (before StartRunning). With that change the image has normal brightness, and rotation is no longer needed.
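A minimal sketch of the corrected setup order (same fields and settings as the question; only the placement of AddOutput changes):
// Configure the session fully, then start it; adding the output after
// StartRunning gives the sensor no time to auto-expose, hence the dark frames.
var _captureSession = new AVCaptureSession();
var _captureDevice = AVCaptureDevice.GetDefaultDevice(AVMediaType.Video);
_captureSession.AddInput(AVCaptureDeviceInput.FromDevice(_captureDevice));

var _output = new AVCaptureStillImageOutput
{
    OutputSettings = new NSDictionary(AVVideo.CodecKey, AVVideo.CodecJPEG)
};
_captureSession.AddOutput(_output);   // before StartRunning

_captureSession.StartRunning();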
I have a Xamarin.Android project, and I would like to recognize a QR code from the camera and save the picture to storage at the same time. I use Android.Hardware.Camera.IPreviewCallback to get images from the camera. Saving the image works as expected, but recognition of the QR code fails. Here is my code:
void Android.Hardware.Camera.IPreviewCallback.OnPreviewFrame(byte[] data, Android.Hardware.Camera camera)
{
    byte[] jpegData = ConvertYuvToJpeg(data);
    Bitmap bitmap = BytesToBitmap(jpegData);
    SaveBitmapImage(bitmap); // This works great

    // Caution: these should match the camera preview size, which can differ
    // from the TextureView size; a mismatch will corrupt the luminance data.
    var width = _textureView.Width;
    var height = _textureView.Height;

    // How to get LuminanceSource??
    //LuminanceSource source = new RGBLuminanceSource(rgbValues, bm.Width, bm.Height, RGBLuminanceSource.BitmapFormat.ARGB32);
    //LuminanceSource source = new RGBLuminanceSource(jpegData, width, height);
    LuminanceSource source = new PlanarYUVLuminanceSource(data, width, height,
        0, 0, width, height, false);
    BinaryBitmap binaryBitmap = new BinaryBitmap(new HybridBinarizer(source));
    QRCodeReader reader = new QRCodeReader();
    var result = reader.decode(binaryBitmap);
}
The call to
var result = reader.decode(binaryBitmap);
always returns null.
Edit:
It seems the problem is with the camera: it is not focusing on the QR code, the image is blurry, and the ZXing library is unable to decode it. How can I make the camera focus?
The problem is with camera focus. The focus mode must be set. Here is the code:
var parameters = _camera.GetParameters();
parameters.FocusMode = GetOptimalFocusMode(parameters);
_camera.SetParameters(parameters);

private String GetOptimalFocusMode(Android.Hardware.Camera.Parameters parameters)
{
    String result;
    IList<String> focusModes = parameters.SupportedFocusModes;
    if (focusModes.Contains(Android.Hardware.Camera.Parameters.FocusModeContinuousVideo))
    {
        result = Android.Hardware.Camera.Parameters.FocusModeContinuousVideo;
    }
    else if (focusModes.Contains(Android.Hardware.Camera.Parameters.FocusModeAuto))
    {
        result = Android.Hardware.Camera.Parameters.FocusModeAuto;
    }
    else
    {
        result = parameters.SupportedFocusModes.First();
    }
    return result;
}
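A hedged sketch of where this fits in the camera setup; the _camera and _textureView fields mirror the question's code, and the exact initialization sequence is an assumption:
// Set the focus mode while configuring the camera, before StartPreview,
// so preview frames are sharp enough for ZXing to decode.
_camera = Android.Hardware.Camera.Open();
var parameters = _camera.GetParameters();
parameters.FocusMode = GetOptimalFocusMode(parameters);
_camera.SetParameters(parameters);
_camera.SetPreviewTexture(_textureView.SurfaceTexture);
_camera.SetPreviewCallback(this); // the class implementing IPreviewCallback
_camera.StartPreview();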
I am working on a Xamarin.Forms + CocosSharp application. I want to load an image from a URL in CocosSharp using CCSprite. How can I achieve this? A normal CCSprite is created from a bundled file like: var sprite = new CCSprite("image.png");
It is better to use the async versions of the stream and Read calls. I did my testing in a place where that was not convenient, but you should use the async versions (see the sketch after the snippet below).
var webClient = new HttpClient();
var imageStream = webClient.GetStreamAsync(new Uri("https://xamarin.com/content/images/pages/forms/example-app.png")).Result;
byte[] imageBytes = new byte[imageStream.Length];
int read = 0;
do
{
    read += imageStream.Read(imageBytes, read, imageBytes.Length - read);
} while (read < imageBytes.Length);
CCTexture2D texture = new CCTexture2D(imageBytes);
var sprite = new CCSprite(texture);
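A minimal async sketch of the same download, using GetByteArrayAsync so no manual read loop (or stream Length support) is needed; LoadSpriteAsync is an illustrative helper name:
// Async variant: download the image bytes, then build the texture and sprite.
async Task<CCSprite> LoadSpriteAsync(string url)
{
    using (var httpClient = new HttpClient())
    {
        byte[] imageBytes = await httpClient.GetByteArrayAsync(new Uri(url));
        var texture = new CCTexture2D(imageBytes);
        return new CCSprite(texture);
    }
}
Usage: var sprite = await LoadSpriteAsync("https://xamarin.com/content/images/pages/forms/example-app.png");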