How to encode and download a PNG on Flutter web

I am trying to take a screenshot of a widget and save it as a PNG.
It works fine on macOS and iOS, but not on the web (Chrome).
The file downloaded from the browser doesn't seem to be encoded correctly; I have tried a lot of different encodings but can't get it working.
It would be nice if someone knows how to encode the image so it downloads correctly on the web as well.
final boundary = _boundaryKey.currentContext.findRenderObject() as RenderRepaintBoundary;
final image = await boundary.toImage(pixelRatio: 2);
final byteData = await image.toByteData(format: ui.ImageByteFormat.png);
final pngBytes = byteData.buffer.asUint8List();
if (kIsWeb) {
  final blob = html.Blob(<dynamic>[base64Encode(pngBytes)], 'image/png');
  final anchorElement = html.AnchorElement(
    href: html.Url.createObjectUrlFromBlob(blob),
  )
    ..setAttribute('download', 'details.png')
    ..click();
} else {
  await File('details_${widget.order.id}'.trim().replaceAll(' ', '_')).writeAsBytes(pngBytes);
}

The trick is to set 'application/octet-stream' as the MIME type when creating the Blob:
final fileName = 'details_${widget.order.id.trim().replaceAll(' ', '_')}';
final boundary = _boundaryKey.currentContext.findRenderObject() as RenderRepaintBoundary;
final image = await boundary.toImage(pixelRatio: 2);
final byteData = await image.toByteData(format: ui.ImageByteFormat.png);
final pngBytes = byteData.buffer.asUint8List();
final blob = html.Blob(<dynamic>[pngBytes], 'application/octet-stream');
html.AnchorElement(href: html.Url.createObjectUrlFromBlob(blob))
  ..setAttribute('download', fileName)
  ..click();
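One small addition worth considering: the object URL created for the download is never released in the snippet above. A minimal sketch of the same download step that also revokes the URL afterwards (the helper name downloadPngBytes is illustrative; it assumes dart:html is imported as html):

import 'dart:html' as html;
import 'dart:typed_data';

// Sketch: trigger a browser download for raw PNG bytes and release the
// temporary object URL afterwards.
void downloadPngBytes(Uint8List pngBytes, String fileName) {
  final blob = html.Blob(<dynamic>[pngBytes], 'application/octet-stream');
  final url = html.Url.createObjectUrlFromBlob(blob);
  html.AnchorElement(href: url)
    ..setAttribute('download', fileName)
    ..click();
  html.Url.revokeObjectUrl(url); // avoid leaking the blob URL
}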

Related

Pass an image to REST with POST API Call in flutter

I am trying to pass an image that is taken from either the camera or picked from the gallery to the backend using a POST API call.
I am using the image_picker plugin for Flutter to access the image from the camera and the gallery.
The following is what I tried:
Get the image file (from the file path) in File format.
Decrease the size of the image to a smaller value (using the flutter_image_compress library for this).
Convert the result into a base64 string, or into form data, and set it as the field in the request body (a multipart sketch is shown after the code below).
I will provide what I tried on the Flutter side for the above approach. Can anyone tell me whether I am doing this right, or whether there is a better way to achieve this?
Pick image from camera
Future _getImageFromCamera() async {
  PickedFile petImage = await picker.getImage(source: ImageSource.camera, maxHeight: 1000);
  var _imageURITemp = File(petImage.path);
  final filePath = _imageURITemp.absolute.path;
  final lastIndex = filePath.lastIndexOf(RegExp(r'.jp'));
  final splitted = filePath.substring(0, lastIndex);
  final outPath = "${splitted}${filePath.substring(lastIndex)}";
  final compressedImage = await FlutterImageCompress.compressAndGetFile(
    filePath,
    outPath,
    minWidth: 1000,
    minHeight: 1000,
    quality: 50,
  );
  setState(() {
    // Keep the compressed file so it can be read and base64-encoded on upload.
    _imageURI = compressedImage;
  });
}
Uploading with the POST method:
String base64Image;
if (_imageURI != null) {
  List<int> imageBytes = _imageURI.readAsBytesSync();
  base64Image = base64.encode(imageBytes);

  var data = {
    'user_email': userEmail,
    'user_token': userToken,
    'pet': {
      'age': petAgeController.text,
      'birth_date': bdate,
      'eatbone': null, // value missing in the original snippet
      'ideal_weight': petIdealWeightController.text,
      'image': base64Image,
      'name': petNameController.text,
      'sex': _petSex,
      'weight': petWeightController.text,
      'guideline_id': '1',
      'activity_level_id': '2',
      'breed_id': '12',
      'user_id': userID,
    }
  };
  // final PET.PetCreate
  final pet = await CallApi().createThePet(data, 'pets/create');
}
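As an alternative to base64-encoding the image into the JSON body (the "form data" option mentioned above), the bytes can be sent as a multipart/form-data upload with the http package. This is only a rough sketch: the endpoint URL and the field names are illustrative and would have to match whatever the backend actually expects.

import 'dart:io';
import 'package:http/http.dart' as http;

Future<void> uploadPetImage(File imageFile, String userEmail, String userToken) async {
  // Hypothetical endpoint; replace with the real backend URL.
  final uri = Uri.parse('https://example.com/api/pets/create');
  final request = http.MultipartRequest('POST', uri)
    ..fields['user_email'] = userEmail
    ..fields['user_token'] = userToken
    // Attach the compressed image file as a multipart part instead of base64.
    ..files.add(await http.MultipartFile.fromPath('image', imageFile.path));
  final response = await request.send();
  if (response.statusCode != 200 && response.statusCode != 201) {
    throw Exception('Upload failed with status ${response.statusCode}');
  }
}

Multipart uploads avoid the roughly 33% size overhead that base64 encoding adds to the request body, which matters for photos even after compression.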

Flutter - How to convert a NetworkImage into a ui.Image?

I need to convert a NetworkImage to a ui.Image.
I tried to use the given solution from this question with some adjustments, but it isn't working.
Can someone help me?
Uint8List yourVar;
ui.Image image;
final DecoderCallback callback =
    (Uint8List bytes, {int cacheWidth, int cacheHeight}) async {
  yourVar = bytes.buffer.asUint8List();
  var codec = await instantiateImageCodec(bytes,
      targetWidth: cacheWidth, targetHeight: cacheHeight);
  var frame = await codec.getNextFrame();
  image = frame.image;
  return image;
};
ImageProvider provider = NetworkImage(yourImageUrl);
provider.obtainKey(createLocalImageConfiguration(context)).then((key) {
  provider.load(key, callback);
});
First, create a field in your class:
var cache = MapCache<String, ui.Image>();
Then, to get the ui.Image, you can simply call:
var myUri = 'http:// ...';
var img = await cache.get(myUri, ifAbsent: (uri) {
  print('getting not cached image from $uri');
  return http.get(uri).then((resp) => decodeImageFromList(resp.bodyBytes));
});
print('image: $img');
Of course, you should add some HTTP response error handling, but this is the basic idea.
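One caveat with the snippet above: in newer versions of the http package (0.13 and later), http.get takes a Uri rather than a String, so the ifAbsent callback would need a Uri.parse. A minimal self-contained sketch under that assumption, using the same decodeImageFromList helper from Flutter's painting library:

import 'dart:ui' as ui;
import 'package:flutter/painting.dart'; // provides decodeImageFromList
import 'package:http/http.dart' as http;

Future<ui.Image> fetchUiImage(String url) async {
  final response = await http.get(Uri.parse(url)); // http >= 0.13 expects a Uri
  if (response.statusCode != 200) {
    throw Exception('Failed to load image: HTTP ${response.statusCode}');
  }
  return decodeImageFromList(response.bodyBytes);
}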

Display image after capturing in Xamarin

I have been trying to make an application that invokes the camera through page rendering, using a custom renderer from Xamarin. My problem is that I need to send the picture to another page/activity on the native side after capturing it, but currently the code only saves the picture to the device's gallery.
For example: I take the picture, and then the image should be displayed with the message "Do you want to save it?". This has to be done in the native project rather than in the PCL. I have been trying this through an intent, but that doesn't work.
All my code does right now is save the image to the gallery.
try
{
    var absolutePath = Android.OS.Environment.GetExternalStoragePublicDirectory(Android.OS.Environment.DirectoryDcim).AbsolutePath;
    var folderPath = absolutePath + "/Camera";
    var filePath = System.IO.Path.Combine(folderPath, string.Format("photo_{0}.jpg", Guid.NewGuid()));
    var fileStream = new FileStream(filePath, FileMode.Create);
    await image.CompressAsync(Bitmap.CompressFormat.Jpeg, 100, fileStream);
    fileStream.Close();
    image.Recycle();
    // imageByte = ((byte[])image);
    var intent = new Android.Content.Intent(Android.Content.Intent.ActionMediaScannerScanFile);
    var file = new Java.IO.File(filePath);
    var uri = Android.Net.Uri.FromFile(file);
    intent.SetData(uri);
    //intent.PutExtra("image", imageByte);
    MainActivity.Instances.SendBroadcast(intent);
}
Solved it by passing the bitmap through an intent.
Activity 1:
byte[] imageByte;
var image = textureView.Bitmap;
MemoryStream memStream = new MemoryStream();
// ByteArrayOutputStream _bs = new ByteArrayOutputStream();
await image.CompressAsync(Bitmap.CompressFormat.Jpeg, 100, memStream);
imageByte = memStream.ToArray();
Intent i = new Intent(this.Context, typeof(CameraDisplay));
i.PutExtra("image", imageByte);
activity.StartActivity(i);
Activity 2:
byte[] Image = Intent.GetByteArrayExtra("image");
imageView = FindViewById<ImageView>(Resource.Id.imageView1);
Bitmap bitmap = BitmapFactory.DecodeByteArray(Image, 0, Image.Length);
imageView.SetImageBitmap(bitmap);
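One caveat with this approach: Intent extras go through Android's binder transaction buffer, which is limited to roughly 1 MB, so passing a full-resolution JPEG as a byte array can throw a TransactionTooLargeException. A safer variant (a sketch only; the file name and extra key are illustrative) is to write the bytes to the app's cache directory and pass just the path:

// Activity 1: write the compressed bytes to a cache file and pass the path.
var tempPath = System.IO.Path.Combine(this.Context.CacheDir.AbsolutePath, "captured_photo.jpg");
System.IO.File.WriteAllBytes(tempPath, imageByte);
Intent i = new Intent(this.Context, typeof(CameraDisplay));
i.PutExtra("imagePath", tempPath);
activity.StartActivity(i);

// Activity 2: read the file back instead of a byte array extra.
var path = Intent.GetStringExtra("imagePath");
Bitmap bitmap = BitmapFactory.DecodeFile(path);
imageView.SetImageBitmap(bitmap);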

Xamarin Forms UWP Capture Screenshot Include Signature Pad

I have a Xamarin Forms page using Signature Pad (https://github.com/xamarin/SignaturePad). I'm attempting to capture a screenshot of the entire view, and it should include the signature as well.
However, using the following code, I'm noticing that the signature does not show up.
What is the best way to capture the full page, including the signature (not just the signature)?
public class ScreenshotService : IScreenshotService
{
    public async Task<byte[]> CaptureAsync()
    {
        var rtb = new RenderTargetBitmap();
        await rtb.RenderAsync(Window.Current.Content);
        var pixelBuffer = await rtb.GetPixelsAsync();
        var pixels = pixelBuffer.ToArray();

        // Useful for rendering in the correct DPI
        var displayInformation = DisplayInformation.GetForCurrentView();

        var stream = new InMemoryRandomAccessStream();
        var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.JpegEncoderId, stream);
        encoder.SetPixelData(BitmapPixelFormat.Bgra8,
                             BitmapAlphaMode.Premultiplied,
                             (uint)rtb.PixelWidth,
                             (uint)rtb.PixelHeight,
                             displayInformation.RawDpiX,
                             displayInformation.RawDpiY,
                             pixels);
        await encoder.FlushAsync();

        stream.Seek(0);
        var readStream = stream.AsStreamForRead();
        var bytes = new byte[readStream.Length];
        readStream.Read(bytes, 0, bytes.Length);
        return bytes;
    }
}
According to the "XAML visuals and RenderTargetBitmap capture capabilities" section of the RenderTargetBitmap class documentation:
Content that can't be captured will appear as blank in the captured image, but other content in the same visual tree can still be captured and will render (the presence of content that can't be captured won't invalidate the entire capture of that XAML composition).
So it could be that the content of the InkCanvas is not capturable. However, you can use Win2D to composite the signature image onto the page capture. For more, you could refer to the following code.
public async Task<Stream> CaptureAsync(Stream Tem)
{
    var rtb = new RenderTargetBitmap();
    await rtb.RenderAsync(Window.Current.Content);
    var pixelBuffer = await rtb.GetPixelsAsync();
    var pixels = pixelBuffer.ToArray();
    var displayInformation = DisplayInformation.GetForCurrentView();
    var stream = new InMemoryRandomAccessStream();
    var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.JpegEncoderId, stream);
    encoder.SetPixelData(BitmapPixelFormat.Bgra8,
                         BitmapAlphaMode.Premultiplied,
                         (uint)rtb.PixelWidth,
                         (uint)rtb.PixelHeight,
                         displayInformation.RawDpiX,
                         displayInformation.RawDpiY,
                         pixels);
    await encoder.FlushAsync();
    stream.Seek(0);
    var readStream = stream.AsStreamForRead();
    var pagebitmap = await GetSoftwareBitmap(readStream);
    var softwareBitmap = await GetSoftwareBitmap(Tem);

    CanvasDevice device = CanvasDevice.GetSharedDevice();
    CanvasRenderTarget renderTarget = new CanvasRenderTarget(device, rtb.PixelWidth, rtb.PixelHeight, 96);
    using (var ds = renderTarget.CreateDrawingSession())
    {
        ds.Clear(Colors.White);
        var page = CanvasBitmap.CreateFromSoftwareBitmap(device, pagebitmap);
        var image = CanvasBitmap.CreateFromSoftwareBitmap(device, softwareBitmap);
        ds.DrawImage(page);
        ds.DrawImage(image);
    }

    InMemoryRandomAccessStream randomAccessStream = new InMemoryRandomAccessStream();
    await renderTarget.SaveAsync(randomAccessStream, CanvasBitmapFileFormat.Jpeg, 1f);
    return randomAccessStream.AsStream();
}

private async Task<SoftwareBitmap> GetSoftwareBitmap(Stream data)
{
    BitmapDecoder pagedecoder = await BitmapDecoder.CreateAsync(data.AsRandomAccessStream());
    return await pagedecoder.GetSoftwareBitmapAsync(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied);
}
The IScreenshotServicecs interface:
public interface IScreenshotServicecs
{
    Task<Stream> CaptureAsync(Stream stream);
}
Usage
var stream = await SignatureView.GetImageStreamAsync(SignaturePad.Forms.SignatureImageFormat.Png);
var data = await DependencyService.Get<IScreenshotServicecs>().CaptureAsync(stream);
MyImage.Source = ImageSource.FromStream(() => data);
Here is my final implementation, including converting to a byte array.
public async Task<byte[]> CaptureAsync(Stream signatureStream)
{
    var rtb = new RenderTargetBitmap();
    await rtb.RenderAsync(Window.Current.Content);
    var pixelBuffer = await rtb.GetPixelsAsync();
    var pixels = pixelBuffer.ToArray();
    var displayInformation = DisplayInformation.GetForCurrentView();
    var stream = new InMemoryRandomAccessStream();
    var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.JpegEncoderId, stream);
    encoder.SetPixelData(BitmapPixelFormat.Bgra8,
                         BitmapAlphaMode.Premultiplied,
                         (uint)rtb.PixelWidth,
                         (uint)rtb.PixelHeight,
                         displayInformation.RawDpiX,
                         displayInformation.RawDpiY,
                         pixels);
    await encoder.FlushAsync();
    stream.Seek(0);
    var readStream = stream.AsStreamForRead();
    var pagebitmap = await GetSoftwareBitmap(readStream);
    var softwareBitmap = await GetSoftwareBitmap(signatureStream);

    CanvasDevice device = CanvasDevice.GetSharedDevice();
    CanvasRenderTarget renderTarget = new CanvasRenderTarget(device, rtb.PixelWidth, rtb.PixelHeight, 96);
    using (var ds = renderTarget.CreateDrawingSession())
    {
        ds.Clear(Colors.White);
        var page = CanvasBitmap.CreateFromSoftwareBitmap(device, pagebitmap);
        var image = CanvasBitmap.CreateFromSoftwareBitmap(device, softwareBitmap);
        ds.DrawImage(page);
        ds.DrawImage(image, 50, 55);
    }

    InMemoryRandomAccessStream randomAccessStream = new InMemoryRandomAccessStream();
    await renderTarget.SaveAsync(randomAccessStream, CanvasBitmapFileFormat.Jpeg, 1f);

    var fileBytes = new byte[randomAccessStream.Size];
    using (var reader = new DataReader(randomAccessStream))
    {
        await reader.LoadAsync((uint)randomAccessStream.Size);
        reader.ReadBytes(fileBytes);
    }
    return fileBytes;
}

ImageProcessorCore: Attempt to resample image results in zero-length response

I am trying to resample a JPEG image from 300 DPI to 150 DPI and am getting back a zero-length file.
Controller's ActionResult:
public ActionResult ViewImage(string file, int dpi = 300, bool log = true)
{
    FileExtensions fileExtensions = new FileExtensions();
    ImageExtensions imageExtensions = new ImageExtensions();
    FileModel fileModel = fileExtensions.GetFileModel(file);
    string contentType = fileModel.FileType;
    byte[] fileData = fileModel.FileData;
    string fileName = Path.GetFileNameWithoutExtension(fileModel.FileName) + "_" + dpi + "DPI" + Path.GetExtension(fileModel.FileName);
    FileStreamResult resampledImage = imageExtensions.ResampleImage(fileData, contentType, dpi);
    resampledImage.FileDownloadName = fileName;
    return resampledImage;
}
ResampleImage method:
public FileStreamResult ResampleImage(byte[] fileData, string contentType, int targetDPI)
{
    MemoryStream outputStream = new MemoryStream();
    using (Stream sourceStream = new MemoryStream(fileData))
    {
        Image image = new Image(sourceStream);
        image.HorizontalResolution = targetDPI;
        image.VerticalResolution = targetDPI;
        JpegEncoder jpegEncoder = new JpegEncoder();
        jpegEncoder.Quality = 100;
        image.Save(outputStream, jpegEncoder);
    }
    FileStreamResult file = new FileStreamResult(outputStream, contentType);
    return file;
}
I thought I'd best answer here since we've already dealt with it on the issue tracker.
ImageProcessorCore at present (2016-08-03) is alpha software and, as such, unfinished. When you were having the issue, the horizontal and vertical resolution was not settable on JPEG images. That has now been fixed.
Incidentally, there are overloads that allow saving as JPEG without having to create your own JpegEncoder instance:
image.SaveAsJpeg(outputStream);
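Separate from the encoder fix, another common cause of a zero-length download in ASP.NET MVC is returning a MemoryStream whose position is still at the end after writing, since FileStreamResult reads from the current position. It may be worth rewinding the stream in ResampleImage before returning it, roughly like this:

image.SaveAsJpeg(outputStream);
outputStream.Position = 0; // rewind so FileStreamResult reads the encoded bytes from the start
return new FileStreamResult(outputStream, contentType);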
