Flutter - How to convert a NetworkImage into a ui.Image?

I need to convert a NetworkImage to a ui.Image.
I tried to use the solution from this question with some adjustments, but it isn't working.
Can someone help me?
Uint8List yourVar;
ui.Image image;

final DecoderCallback callback =
    (Uint8List bytes, {int cacheWidth, int cacheHeight}) async {
  yourVar = bytes.buffer.asUint8List();
  var codec = await instantiateImageCodec(bytes,
      targetWidth: cacheWidth, targetHeight: cacheHeight);
  var frame = await codec.getNextFrame();
  image = frame.image;
  return image;
};

ImageProvider provider = NetworkImage(yourImageUrl);
provider.obtainKey(createLocalImageConfiguration(context)).then((key) {
  provider.load(key, callback);
});

First, create a field in your class:
var cache = MapCache<String, ui.Image>();
Then, to get a ui.Image, you can simply call:
var myUri = 'http:// ...';
var img = await cache.get(myUri, ifAbsent: (uri) {
  print('getting not cached image from $uri');
  return http.get(uri).then((resp) => decodeImageFromList(resp.bodyBytes));
});
print('image: $img');
Of course, you should add some HTTP error handling, but this is the basic idea.
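The same get-or-fetch idea works in any language. Here is a minimal JavaScript sketch of the `MapCache.get(..., ifAbsent: ...)` pattern, where `loader` is a hypothetical stand-in for the HTTP fetch plus `decodeImageFromList` step (names and shape are illustrative, not part of the original answer):

```javascript
// A minimal get-or-fetch cache in the spirit of MapCache.get(..., ifAbsent: ...).
// `loader` stands in for the HTTP fetch + image decode and is a hypothetical callback.
class AsyncCache {
  constructor() {
    this.entries = new Map();
  }

  // Return the cached value for `key`, or run `loader` exactly once and cache its result.
  get(key, loader) {
    if (!this.entries.has(key)) {
      this.entries.set(key, Promise.resolve(loader(key)));
    }
    return this.entries.get(key);
  }
}
```

Because the promise itself is cached, concurrent callers for the same URI share one in-flight fetch instead of each triggering a download.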

Related

Correct way to pass a stream image to a tensorflow-lite model using Xamarin

I'm currently developing image classification using Xamarin.Forms. The model works well and the prediction results are good, but my problem is that it only works when I use PickPhotoAsync: the picked photo goes to a MediaFile, then MediaFile > get stream > get the bytes > feed to the model. I used this path for testing only.
What I really need is to get the image from SignaturePad > get stream > get the bytes > feed to the model.
I get wrong predictions when I change the source to SignaturePad. I hope someone can help me; thank you in advance.
Here's my code.
protected async void PickPhotoAsync(object sender, EventArgs e)
{
    var file = await CrossMedia.Current.PickPhotoAsync();
    HandlePhoto(file);
}

private void HandlePhoto(MediaFile file)
{
    var stream = file.GetStreamWithImageRotatedForExternalStorage();
    var memoryStream = new MemoryStream();
    stream.CopyTo(memoryStream);
    var bytes = memoryStream.ToArray();
    var len = stream.Length;
    var pos = stream.Position;
    var read = stream.CanRead;
    var lens = bytes.Length;
    List<ImageClassificationModel> classifyImage = DependencyService.Get<IClassify>().Classify(bytes);
    var sortedList = classifyImage.OrderByDescending(x => x.Probability);
    var top = sortedList.First(); // highest prediction result
    var max = sortedList.Max(x => x.Probability);
}
Here is my code after changing the image source to SignaturePad, which gives bad results:
private async void SaveImagePad(object sender, EventArgs e)
{
    // Get the stream from SignaturePad
    Stream image = await PadView.GetImageStreamAsync(SignatureImageFormat.Png);
    if (image == null)
        return;
    // Convert to bytes before passing to the classifier
    BinaryReader br = new BinaryReader(image);
    byte[] acImage = br.ReadBytes((int)image.Length);
    List<ImageClassificationModel> classifyImage = DependencyService.Get<IClassify>().Classify(acImage);
    //File.Delete(path);
    var sortedList = classifyImage.OrderByDescending(x => x.Probability);
    var top = sortedList.First();
    var max = sortedList.Max(x => x.Probability); // I get bad and random results here
}
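One likely source of trouble in the snippet above is `br.ReadBytes((int)image.Length)`, which assumes the stream is seekable, positioned at zero, and reports an accurate Length; the working `HandlePhoto` path instead copies with `CopyTo` until end-of-stream. A sketch of that defensive copy pattern in Node.js (JavaScript is used here only for illustration; `streamToBytes` is a hypothetical helper, not part of the original code):

```javascript
// Collect an entire readable stream into one byte buffer without relying on
// a known Length or a seekable stream (the assumptions the BinaryReader makes).
async function streamToBytes(readable) {
  const chunks = [];
  for await (const chunk of readable) {
    // Normalize every chunk to a Buffer before concatenating.
    chunks.push(Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk));
  }
  return Buffer.concat(chunks);
}
```

The C# equivalent is simply `image.CopyTo(memoryStream)` followed by `memoryStream.ToArray()`, exactly as the working photo path already does.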

How to encode and download png on Flutter web

I am trying to take a screenshot of a widget and save it as a PNG.
It works fine on macOS and iOS, but not on the web (Chrome).
The file downloaded from the browser doesn't seem to be encoded correctly; I have tried a lot of different encodings but can't get it working.
It would be nice if someone knew how to encode the image so that it downloads correctly on the web too.
final boundary = _boundaryKey.currentContext.findRenderObject() as RenderRepaintBoundary;
final image = await boundary.toImage(pixelRatio: 2);
final byteData = await image.toByteData(format: ui.ImageByteFormat.png);
final pngBytes = byteData.buffer.asUint8List();

if (kIsWeb) {
  final blob = html.Blob(<dynamic>[base64Encode(pngBytes)], 'image/png');
  final anchorElement = html.AnchorElement(
    href: html.Url.createObjectUrlFromBlob(blob),
  )
    ..setAttribute('download', 'details.png')
    ..click();
} else {
  await File('details_${widget.order.id}'.trim().replaceAll(' ', '_')).writeAsBytes(pngBytes);
}
The trick is to pass the raw bytes (not a base64-encoded string) and to set 'application/octet-stream' when creating the Blob:
final fileName = 'details_${widget.order.id.trim().replaceAll(' ', '_')}';
final boundary = _boundaryKey.currentContext.findRenderObject() as RenderRepaintBoundary;
final image = await boundary.toImage(pixelRatio: 2);
final byteData = await image.toByteData(format: ui.ImageByteFormat.png);
final pngBytes = byteData.buffer.asUint8List();
final blob = html.Blob(<dynamic>[pngBytes], 'application/octet-stream');
html.AnchorElement(href: html.Url.createObjectUrlFromBlob(blob))
  ..setAttribute('download', fileName)
  ..click();
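To see why the original version produced a corrupt file: `base64Encode(pngBytes)` turns the binary data into ASCII text, and the Blob then stores that text verbatim rather than the PNG bytes. A small Node.js sketch of the difference (Node's `Buffer` stands in for the byte handling; the four-byte prefix is just an example):

```javascript
// The broken version wrapped a base64 *string* in the Blob, so the browser saved
// the ASCII text of the encoding rather than the PNG bytes themselves.
const pngBytes = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // the PNG magic-number prefix
const asText = Buffer.from(pngBytes.toString('base64'), 'utf8'); // what the broken Blob contained
console.log(pngBytes.length, asText.length); // prints: 4 8 (the text payload is longer)

// Decoding the base64 text is the only way to recover the original bytes:
const decoded = Buffer.from(asText.toString('utf8'), 'base64');
console.log(decoded.equals(pngBytes)); // prints: true
```

Passing `pngBytes` directly, as the fixed Dart snippet does, avoids the round-trip entirely; a PNG viewer given the base64 text would fail on the very first magic byte.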

Pass an image to REST with POST API Call in flutter

I am trying to pass an image, taken from either the camera or picked from the gallery, to the backend using a POST API call.
I am using the image_picker plugin for Flutter to access the image from the camera and gallery.
Here is what I tried:
1. Get the image file (from the file path) in File format.
2. Decrease the size of the image (using the flutter_image_compress library).
3. Convert the result into a base64 string (or form data) and set it as a field in the request body.
Below is my Flutter code for the above approach. Can anyone tell me whether I am doing this right, or is there a better way to achieve this?
Pick image from camera
Future _getImageFromCamera() async {
  PickedFile petImage = await picker.getImage(source: ImageSource.camera, maxHeight: 1000);
  var _imageURITemp = File(petImage.path);
  final filePath = _imageURITemp.absolute.path;
  final lastIndex = filePath.lastIndexOf(RegExp(r'.jp'));
  final splitted = filePath.substring(0, lastIndex);
  final outPath = "${splitted}${filePath.substring(lastIndex)}";
  final compressedImage = await FlutterImageCompress.compressAndGetFile(
      filePath, outPath,
      minWidth: 1000, minHeight: 1000, quality: 50);
  setState(() {
    // Wrap the compressed file's path in a File; the original cast of the
    // path's last segment (a String) to IO.File would throw at runtime.
    _imageURI = IO.File(compressedImage.path);
  });
}
Uploading with the POST method
String base64Image;
if (_imageURI != null) {
  List<int> imageBytes = _imageURI.readAsBytesSync();
  base64Image = base64.encode(imageBytes);
  var data = {
    'user_email': userEmail,
    'user_token': userToken,
    'pet': {
      'age': petAgeController.text,
      'birth_date': bdate,
      'eatbone': ,
      'ideal_weight': petIdealWeightController.text,
      'image': base64Image,
      'name': petNameController.text,
      'sex': _petSex,
      'weight': petWeightController.text,
      'guideline_id': '1',
      'activity_level_id': '2',
      'breed_id': '12',
      'user_id': userID,
    }
  };
  // final PET.PetCreate
  final pet = await CallApi().createThePet(data, 'pets/create');
}
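The request-body construction above is language-agnostic: read the bytes, base64-encode them, and embed the string in the JSON payload. A minimal JavaScript sketch of the same pattern (`buildPetPayload` and `petFields` are hypothetical names introduced for illustration, not part of the original code):

```javascript
// Build a JSON request body that embeds image bytes as a base64 string,
// mirroring the 'image': base64Image field in the Dart snippet above.
// `petFields` stands in for the other controller-backed form fields.
function buildPetPayload(imageBytes, petFields) {
  const base64Image = Buffer.from(imageBytes).toString('base64');
  return JSON.stringify({ pet: { ...petFields, image: base64Image } });
}
```

The server decodes the `image` field back to bytes; the trade-off of embedding base64 in JSON (versus multipart form data) is roughly 33% payload growth in exchange for a single self-contained request body.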

Chromeless - get all images src from a webpage

I'm trying to get the src values for all img tags in an HTML page using Chromeless. My current implementation is something like this:
async function run() {
  const chromeless = new Chromeless();
  let url = 'http://someurl/somepath.html';
  var allImgUrls = await chromeless
    .goto(url)
    .evaluate(() => document.getElementsByTagName('img'));
  var htmlContent = await chromeless
    .goto(url)
    .evaluate(() => document.documentElement.outerHTML);
  console.log(allImgUrls);
  await chromeless.end();
}
The issue is that I'm not getting any values for the img objects in allImgUrls.
After some research, I found out that we could use this approach:
var imgSrcs = await chromeless
  .goto(url)
  .evaluate(() => {
    // document.querySelectorAll doesn't return an array but a NodeList (array-like),
    // so we call the map function from Array.prototype, equivalent to [].map.call()
    const srcs = [].map.call(document.querySelectorAll('img'), img => img.src);
    return JSON.stringify(srcs);
  });
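The `[].map.call` trick works because `Array.prototype.map` accepts any array-like object; `Array.from` with a mapping function is the modern equivalent. A small sketch with a plain object standing in for the NodeList:

```javascript
// Map over any array-like (such as a NodeList) to collect src values;
// Array.from with a mapping callback is the modern form of [].map.call.
function collectSrcs(arrayLike) {
  return Array.from(arrayLike, (img) => img.src);
}
```

The `JSON.stringify` in the Chromeless snippet matters for a different reason: `evaluate` serializes the callback's return value across the browser boundary, and DOM nodes (as in the first attempt with `getElementsByTagName`) do not survive that serialization, while a plain array of strings does.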

Xamarin Forms UWP Capture Screenshot Include Signature Pad

I have a Xamarin.Forms page using Signature Pad (https://github.com/xamarin/SignaturePad). I'm attempting to capture a screenshot of the entire view, which should include the signature as well.
However, with the following code the signature does not show up.
What is the best way to capture the full page including the signature (not just the signature)?
public class ScreenshotService : IScreenshotService
{
    public async Task<byte[]> CaptureAsync()
    {
        var rtb = new RenderTargetBitmap();
        await rtb.RenderAsync(Window.Current.Content);
        var pixelBuffer = await rtb.GetPixelsAsync();
        var pixels = pixelBuffer.ToArray();

        // Useful for rendering in the correct DPI
        var displayInformation = DisplayInformation.GetForCurrentView();
        var stream = new InMemoryRandomAccessStream();
        var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.JpegEncoderId, stream);
        encoder.SetPixelData(BitmapPixelFormat.Bgra8,
                             BitmapAlphaMode.Premultiplied,
                             (uint)rtb.PixelWidth,
                             (uint)rtb.PixelHeight,
                             displayInformation.RawDpiX,
                             displayInformation.RawDpiY,
                             pixels);
        await encoder.FlushAsync();
        stream.Seek(0);

        var readStream = stream.AsStreamForRead();
        var bytes = new byte[readStream.Length];
        readStream.Read(bytes, 0, bytes.Length);
        return bytes;
    }
}
According to the "XAML visuals and RenderTargetBitmap capture capabilities" section of the RenderTargetBitmap class documentation:
Content that can't be captured will appear as blank in the captured image, but other content in the same visual tree can still be captured and will render (the presence of content that can't be captured won't invalidate the entire capture of that XAML composition).
So it could be that the content of the InkCanvas is not capturable. However, you can use Win2D to composite the signature onto the capture yourself; refer to the following code.
public async Task<Stream> CaptureAsync(Stream Tem)
{
    var rtb = new RenderTargetBitmap();
    await rtb.RenderAsync(Window.Current.Content);
    var pixelBuffer = await rtb.GetPixelsAsync();
    var pixels = pixelBuffer.ToArray();
    var displayInformation = DisplayInformation.GetForCurrentView();
    var stream = new InMemoryRandomAccessStream();
    var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.JpegEncoderId, stream);
    encoder.SetPixelData(BitmapPixelFormat.Bgra8,
                         BitmapAlphaMode.Premultiplied,
                         (uint)rtb.PixelWidth,
                         (uint)rtb.PixelHeight,
                         displayInformation.RawDpiX,
                         displayInformation.RawDpiY,
                         pixels);
    await encoder.FlushAsync();
    stream.Seek(0);

    var readStream = stream.AsStreamForRead();
    var pagebitmap = await GetSoftwareBitmap(readStream);
    var softwareBitmap = await GetSoftwareBitmap(Tem);

    CanvasDevice device = CanvasDevice.GetSharedDevice();
    CanvasRenderTarget renderTarget = new CanvasRenderTarget(device, rtb.PixelWidth, rtb.PixelHeight, 96);
    using (var ds = renderTarget.CreateDrawingSession())
    {
        ds.Clear(Colors.White);
        var page = CanvasBitmap.CreateFromSoftwareBitmap(device, pagebitmap);
        var image = CanvasBitmap.CreateFromSoftwareBitmap(device, softwareBitmap);
        ds.DrawImage(page);
        ds.DrawImage(image);
    }

    InMemoryRandomAccessStream randomAccessStream = new InMemoryRandomAccessStream();
    await renderTarget.SaveAsync(randomAccessStream, CanvasBitmapFileFormat.Jpeg, 1f);
    return randomAccessStream.AsStream();
}

private async Task<SoftwareBitmap> GetSoftwareBitmap(Stream data)
{
    BitmapDecoder pagedecoder = await BitmapDecoder.CreateAsync(data.AsRandomAccessStream());
    return await pagedecoder.GetSoftwareBitmapAsync(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied);
}
The IScreenshotService interface:
public interface IScreenshotService
{
    Task<Stream> CaptureAsync(Stream stream);
}
Usage:
var stream = await SignatureView.GetImageStreamAsync(SignaturePad.Forms.SignatureImageFormat.Png);
var data = await DependencyService.Get<IScreenshotService>().CaptureAsync(stream);
MyImage.Source = ImageSource.FromStream(() => data);
Here is my final implementation, including converting the result to a byte array.
public async Task<byte[]> CaptureAsync(Stream signatureStream)
{
    var rtb = new RenderTargetBitmap();
    await rtb.RenderAsync(Window.Current.Content);
    var pixelBuffer = await rtb.GetPixelsAsync();
    var pixels = pixelBuffer.ToArray();
    var displayInformation = DisplayInformation.GetForCurrentView();
    var stream = new InMemoryRandomAccessStream();
    var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.JpegEncoderId, stream);
    encoder.SetPixelData(BitmapPixelFormat.Bgra8,
                         BitmapAlphaMode.Premultiplied,
                         (uint)rtb.PixelWidth,
                         (uint)rtb.PixelHeight,
                         displayInformation.RawDpiX,
                         displayInformation.RawDpiY,
                         pixels);
    await encoder.FlushAsync();
    stream.Seek(0);

    var readStream = stream.AsStreamForRead();
    var pagebitmap = await GetSoftwareBitmap(readStream);
    var softwareBitmap = await GetSoftwareBitmap(signatureStream);

    CanvasDevice device = CanvasDevice.GetSharedDevice();
    CanvasRenderTarget renderTarget = new CanvasRenderTarget(device, rtb.PixelWidth, rtb.PixelHeight, 96);
    using (var ds = renderTarget.CreateDrawingSession())
    {
        ds.Clear(Colors.White);
        var page = CanvasBitmap.CreateFromSoftwareBitmap(device, pagebitmap);
        var image = CanvasBitmap.CreateFromSoftwareBitmap(device, softwareBitmap);
        ds.DrawImage(page);
        ds.DrawImage(image, 50, 55);
    }

    InMemoryRandomAccessStream randomAccessStream = new InMemoryRandomAccessStream();
    await renderTarget.SaveAsync(randomAccessStream, CanvasBitmapFileFormat.Jpeg, 1f);
    var fileBytes = new byte[randomAccessStream.Size];
    using (var reader = new DataReader(randomAccessStream))
    {
        await reader.LoadAsync((uint)randomAccessStream.Size);
        reader.ReadBytes(fileBytes);
    }
    return fileBytes;
}
