Structure of SKPictureRecorder - Xamarin

I am trying to use SkiaSharp's SKPictureRecorder to draw a path on an SKCanvas. The image is not saved to disk. Please find the code below for reference.
SKPictureRecorder record = new SKPictureRecorder();
SKCanvas tempCanvas = record.BeginRecording(new SKRect(0, 0, 640, 480));
tempCanvas.Clear(SKColors.White);

// set up drawing tools
using (var paint = new SKPaint())
{
    paint.IsAntialias = true;
    paint.Color = new SKColor(0x2c, 0x3e, 0x50);
    paint.StrokeCap = SKStrokeCap.Round;

    // create the Xamagon path
    using (var path = new SKPath())
    {
        path.MoveTo(71.4311121f, 56f);
        path.CubicTo(68.6763107f, 56.0058575f, 65.9796704f, 57.5737917f, 64.5928855f, 59.965729f);
        path.LineTo(43.0238921f, 97.5342563f);
        path.CubicTo(41.6587026f, 99.9325978f, 41.6587026f, 103.067402f, 43.0238921f, 105.465744f);
        path.LineTo(64.5928855f, 143.034271f);
        path.CubicTo(65.9798162f, 145.426228f, 68.6763107f, 146.994582f, 71.4311121f, 147f);
        path.LineTo(114.568946f, 147f);
        path.CubicTo(117.323748f, 146.994143f, 120.020241f, 145.426228f, 121.407172f, 143.034271f);
        path.LineTo(142.976161f, 105.465744f);
        path.CubicTo(144.34135f, 103.067402f, 144.341209f, 99.9325978f, 142.976161f, 97.5342563f);
        path.LineTo(121.407172f, 59.965729f);
        path.CubicTo(120.020241f, 57.5737917f, 117.323748f, 56.0054182f, 114.568946f, 56f);
        path.LineTo(71.4311121f, 56f);
        path.Close();

        // draw the Xamagon path
        tempCanvas.DrawPath(path, paint);
    }
}

SKPicture picture = record.EndRecording();

using (var image = SKImage.FromPicture(picture, new SKSizeI(640, 480)))
using (var data = image.Encode(SKEncodedImageFormat.Png, 80))
{
    // save the data to a stream
    using (var stream = File.OpenWrite("testing.png"))
    {
        data.SaveTo(stream);
    }
}
Can you check whether there is any issue with the code?
Also, please let me know the structure of SKPictureRecorder in SkiaSharp.
Thanks in advance.
Regards,
Sabari
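On the question itself: the recording pattern looks right. One thing to double-check is the write: File.OpenWrite does not truncate an existing file, so overwriting a previously larger testing.png can leave stale trailing bytes; File.Create (and checking that the app can actually write to that path) is the safer choice. As for the structure of SKPictureRecorder, its surface is small; the sketch below is my hedged reading of the current SkiaSharp API, so verify it against your installed version:
// SKPictureRecorder is IDisposable, so wrap it in a using block.
using (var recorder = new SKPictureRecorder())
{
    // BeginRecording takes the cull rect and returns the SKCanvas that
    // records every subsequent draw call (also exposed as RecordingCanvas).
    SKCanvas canvas = recorder.BeginRecording(new SKRect(0, 0, 640, 480));
    canvas.Clear(SKColors.White); // record some drawing

    // EndRecording stops recording and returns the replayable SKPicture.
    using (SKPicture picture = recorder.EndRecording())
    using (var image = SKImage.FromPicture(picture, new SKSizeI(640, 480)))
    {
        // ...encode and save as in the question...
    }
}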

Related

How to load/draw an image using NGraphics?

In my Xamarin.Forms app, I have an image under androidProject/Resources/drawable/myImage.png. To load it from Xamarin.Forms, you can simply do
Image myImage = new Image() { Source = ImageSource.FromFile("myImage.png") };
However, there is no way to draw a Xamarin.Forms.Image using NGraphics; its DrawImage method requires an NGraphics.IImage, and as far as I can tell there's no way to turn a Xamarin.Forms.Image into one. In fact, the only way I could find to load an IImage is
IImage myImage = Platform.LoadImage("myImage.png");
However, this doesn't work because under the hood this uses BitmapFactory.decodeFile(), which requires the absolute file path. And I couldn't find any way to get the absolute file path of a resource (if it even exists?)
So, how do I actually load and display an image using NGraphics?
NGraphics does not provide any helpers to load images from your platform's resource files.
You could do something like the following. However, it adds some overhead converting back and forth between bitmap -> stream -> bitmap.
Android:
Stream GetDrawableStream(Context context, int resourceId)
{
    var drawable = ResourcesCompat.GetDrawable(context.Resources, resourceId, context.Theme);
    if (drawable is BitmapDrawable bitmapDrawable)
    {
        var stream = new MemoryStream();
        var bitmap = bitmapDrawable.Bitmap;
        bitmap.Compress(Bitmap.CompressFormat.Png, 80, stream);
        // Careful: this bitmap belongs to the resource cache, so recycling it
        // can break later reuse of the same drawable.
        bitmap.Recycle();
        stream.Position = 0; // rewind so the consumer reads from the start
        return stream;
    }
    return null;
}
iOS:
Stream GetImageStream(string fileName)
{
    using (var image = UIImage.FromFile(fileName))
    using (var imageData = image.AsPNG())
    {
        var byteArray = new byte[imageData.Length];
        System.Runtime.InteropServices.Marshal.Copy(imageData.Bytes, byteArray, 0, Convert.ToInt32(imageData.Length));
        var stream = new MemoryStream(byteArray);
        return stream;
    }
}
However, you could go directly from Bitmap to BitmapImage on Android instead like:
BitmapImage GetBitmapFromDrawable(Context context, int resourceId)
{
    var drawable = ResourcesCompat.GetDrawable(context.Resources, resourceId, context.Theme);
    if (drawable is BitmapDrawable bitmapDrawable)
    {
        var bitmap = bitmapDrawable.Bitmap;
        return new BitmapImage(bitmap);
    }
    return null;
}
And on iOS:
CGImageImage GetCGImageImage(string fileName)
{
    var iOSImage = UIImage.FromFile(fileName);
    var cgImage = new CGImageImage(iOSImage.CGImage, iOSImage.Scale);
    return cgImage;
}
BitmapImage and CGImageImage implement IImage in NGraphics.
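To close the loop, here is a minimal usage sketch on Android. The resource id (Resource.Drawable.myImage) and the 200x200 size are illustrative assumptions, and Platforms.Current.CreateImageCanvas / DrawImage reflect my reading of the NGraphics API, so verify against the version you have:
// Load the drawable as an NGraphics IImage and draw it onto a canvas.
IImage image = GetBitmapFromDrawable(context, Resource.Drawable.myImage); // hypothetical resource id
IImageCanvas canvas = Platforms.Current.CreateImageCanvas(new Size(200, 200));
canvas.DrawImage(image, new Rect(0, 0, 200, 200));
IImage result = canvas.GetImage(); // the composed result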

Load CCSprite image from URL - CocosSharp + Xamarin.Forms

I am working on a Xamarin.Forms + CocosSharp application. I want to load an image from a URL in CocosSharp using CCSprite. How can I achieve this? A normal CCSprite is created like: var sprite = new CCSprite("image.png");
It is better to use the async versions of the stream and read calls. I was testing in a place where async was not convenient, so the snippet below is synchronous, but you should use the async versions.
var webClient = new HttpClient();
var imageStream = webClient.GetStreamAsync(new Uri("https://xamarin.com/content/images/pages/forms/example-app.png")).Result;

// The response stream does not reliably support Length, so buffer it into a
// MemoryStream before extracting the bytes.
var buffer = new MemoryStream();
imageStream.CopyTo(buffer);
byte[] imageBytes = buffer.ToArray();

CCTexture2D texture = new CCTexture2D(imageBytes);
var sprite = new CCSprite(texture);
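Since the answer recommends the async versions, here is a hedged sketch of the same download done with await (LoadSpriteAsync is a made-up helper name):
// Download the image bytes asynchronously and build the sprite from them.
async Task<CCSprite> LoadSpriteAsync(string url)
{
    using (var httpClient = new HttpClient())
    {
        byte[] imageBytes = await httpClient.GetByteArrayAsync(new Uri(url));
        var texture = new CCTexture2D(imageBytes);
        return new CCSprite(texture);
    }
}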

Unable to port Lumia Imaging SDK 2.0 to SDK 3.0 (UWP)

I am having a tough time converting Lumia Imaging SDK 2.0 code to SDK 3.0 in the specific case below. I used to increase/decrease the image quality of a JPG file using the code below in Windows Phone 8.1 RT apps:
using (StreamImageSource source = new StreamImageSource(fileStream.AsStreamForRead()))
{
    IFilterEffect effect = new FilterEffect(source);
    using (JpegRenderer renderer = new JpegRenderer(effect))
    {
        renderer.Quality = App.COMPRESSION_RATIO / 100.0; // higher value means better quality
        compressedImageBytes = await renderer.RenderAsync();
    }
}
Now, since the FilterEffect class has been replaced in SDK 3.0 with EffectList, I changed the code to:
using (BufferProviderImageSource source = new BufferProviderImageSource(fileStream.AsBufferProvider()))
{
    using (JpegRenderer renderer = new JpegRenderer())
    {
        IImageProvider2 source1 = new EffectList() { Source = source };
        renderer.Source = source1;
        renderer.Quality = App.COMPRESSION_RATIO / 100.0;
        try
        {
            var img = await renderer.RenderAsync();
        }
        catch (Exception ex)
        {
            // swallowed while debugging
        }
    }
}
I am getting an InvalidCastException. I have tried several combinations, but no luck.
I don't really know what is going on with the InvalidCastException; we can continue that discussion in the comments, as it will most likely need some back-and-forth.
That said, you could skip the effect list and chain effects in the normal way. To rewrite your scenario:
using (var source = new StreamImageSource(...))
using (var renderer = new JpegRenderer(source))
{
    renderer.Quality = App.COMPRESSION_RATIO / 100.0;
    var img = await renderer.RenderAsync();
}
If you wanted to add an effect (for example, a CartoonEffect), just do:
using (var source = new StreamImageSource(...))
using (var cartoonEffect = new CartoonEffect(source))
using (var renderer = new JpegRenderer(cartoonEffect))
{
    renderer.Quality = App.COMPRESSION_RATIO / 100.0;
    var img = await renderer.RenderAsync();
}
and so on. If you had effects A, B, C and D, just make a chain Source -> A -> B -> C -> D -> JpegRenderer, as sketched below.
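For concreteness, a hedged sketch of such a chain, with CartoonEffect and GrayscaleEffect standing in for A and B (verify the effect names against your SDK 3.0 version; each stage takes the previous one as its source):
// Each stage wraps the previous one; the renderer consumes the last stage.
using (var source = new StreamImageSource(fileStream.AsStreamForRead()))
using (var a = new CartoonEffect(source))          // A
using (var b = new GrayscaleEffect { Source = a }) // B: the Source property works too
using (var renderer = new JpegRenderer(b))
{
    renderer.Quality = App.COMPRESSION_RATIO / 100.0;
    var img = await renderer.RenderAsync();
}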
I am on VS 2015 Community. While digging around, I got the code below working, which behaves exactly like SDK 2.0. All I did was specify the Size of the JpegRenderer. It works for all landscape images but fails to transform portrait images to the correct orientation. There is no exception, but the result for a portrait image is a widely stretched landscape image.
I initialized the Size for portrait images to Size(765, 1024), but it had no impact.
using (JpegRenderer renderer = new JpegRenderer(source))
{
    renderer.Quality = App.COMPRESSION_RATIO / 100.0;
    try
    {
        var info = await source.GetInfoAsync();
        renderer.Size = new Size(1024, 765);
        compressedImageBytes = await renderer.RenderAsync();
    }
    catch (Exception ex)
    {
        new MessageDialog("Error while compressing.").ShowAsync();
    }
}
I am sorry, the working code was using BufferProviderImageSource instead of StreamImageSource. Below is the snippet. A few points:
1) If I don't use the Size property, I get a "The component cannot be found" exception.
2) GetInfoAsync(): yes, it was useless in the code above, but I need it to know whether the image is landscape or portrait so that I can initialize the Size of the resultant image.
3) If the Size goes beyond 1024x1024 for portrait images, I get the exception "Value does not fall within the expected range".
Why did Lumia make this version so tricky? :(
var buffer = await FileIO.ReadBufferAsync(file); // must be awaited to get the IBuffer
using (var source = new BufferProviderImageSource(buffer.AsBufferProvider()))
{
    EffectList list = new EffectList() { Source = source };
    using (JpegRenderer renderer = new JpegRenderer(list))
    {
        renderer.Quality = App.COMPRESSION_RATIO / 100.0;
        renderer.OutputOption = OutputOption.PreserveAspectRatio;
        try
        {
            var info = await source.GetInfoAsync();
            double width = 0;
            double height = 0;
            if (info.ImageSize.Width > info.ImageSize.Height) // landscape
            {
                width = 1024;
                height = 765;
                if (info.ImageSize.Width < 1024)
                    width = info.ImageSize.Width;
                if (info.ImageSize.Height < 765)
                    height = info.ImageSize.Height;
            }
            else // portrait
            {
                width = 765;
                height = 1024;
                if (info.ImageSize.Width < 765)
                    width = info.ImageSize.Width;
                if (info.ImageSize.Height < 1024)
                    height = info.ImageSize.Height;
            }
            renderer.Size = new Size(width, height);
            compressedImageBytes = await renderer.RenderAsync();
        }
        catch (Exception ex)
        {
            await new MessageDialog(ex.Message).ShowAsync();
        }
    }
}

Nokia Imaging SDK: customizing BlendFilter

I have created this code
Uri _blendImageUri = new Uri(@"Assets/1.png", UriKind.Relative);
var _blendImageProvider = new StreamImageSource(System.Windows.Application.GetResourceStream(_blendImageUri).Stream);
var bf = new BlendFilter(_blendImageProvider);
The filter works nicely. But I want to change the image size for the ForegroundSource property. How can I load an image at my chosen size?
If I understood you correctly, you are trying to blend the ForegroundSource with only a part of the original image? That is called local blending, and it is currently not supported on the BlendFilter itself.
You can, however, use a ReframingFilter to reframe the ForegroundSource and then blend it. Your chain will look something like this:
using (var mainImage = new StreamImageSource(...))
using (var filterEffect = new FilterEffect(mainImage))
{
    using (var secondaryImage = new StreamImageSource(...))
    using (var secondaryFilterEffect = new FilterEffect(secondaryImage))
    using (var reframing = new ReframingFilter(new Rect(0, 0, 500, 500), 0)) // reframe your image, thus "setting" the location and size of the content when blending
    {
        secondaryFilterEffect.Filters = new[] { reframing };
        using (var blendFilter = new BlendFilter(secondaryFilterEffect))
        using (var renderer = new JpegRenderer(filterEffect))
        {
            filterEffect.Filters = new[] { blendFilter };
            await renderer.RenderAsync();
        }
    }
}
As you can see, you can use the reframing filter to position the content of your ForegroundSource so that it only blends locally. Note that when reframing, you can set the borders outside of the image area (for example, new Rect(-100, -100, 500, 500)), and the areas outside of the image will appear as transparent black areas - exactly what you need for BlendFilter.
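For instance, here is a hedged variant of the reframing line above that shifts the foreground by using a negative origin, so the uncovered margin blends as transparent:
// A negative origin offsets where the foreground content lands in the blend.
using (var reframing = new ReframingFilter(new Rect(-100, -100, 500, 500), 0))
{
    secondaryFilterEffect.Filters = new[] { reframing };
    // ...then the same BlendFilter + JpegRenderer chain as above
}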

Flex 4.6: Optimizing View appearance - ContentCache vs. overriding data for a mobile app

I read in this article http://www.adobe.com/devnet/flex/articles/flex-mobile-performance-checklist.html that I should not initialize a View's appearance in a creationComplete handler. Instead, I should change the view's appearance in an overridden data setter.
The relevant section of the article is:
"Override the data setter instead of using bindings or initializing a View's appearance in a creationComplete handler"
First, I would like to know whether I got this right by doing the following:
//My code is loading a set of images and adding them to a View.
//On creationComplete of the View I add the images in case this is the first time
//the view is shown. In case the view has already been accessed, I use the data:
protected function view1_creationCompleteHandler(event:FlexEvent):void
{
    if (!data) //On first creation of the view I create the data object
    {
        data = new Object();
        data.imageArray = new Array(); //set an array that will cache my images.
        for (var i:int = 0; i < 36; i++)
        {
            var img:Image = new Image();
            img.source = 'assets/0' + i.toString() + '.png';
            container.addElement(img);
            (data.imageArray as Array).push(img); //Override the data for next time!
        }
    }
    else //Next time use the saved images
    {
        for (var ix:int = 0; ix < (data.imageArray as Array).length; ix++)
        {
            container.addElement((data.imageArray as Array)[ix]);
        }
    }
}
If I am doing this correctly, I would like to know which approach is better: the one above, or the following one, which uses the images' contentLoader with caching and queueing enabled via a ContentCache:
protected function view1_creationCompleteHandler(event:FlexEvent):void
{
    for (var i:int = 0; i < 36; i++)
    {
        var img:Image = new Image();
        img.contentLoader = ldr;
        img.contentLoaderGrouping = 'gr1';
        img.source = 'assets/0' + i.toString() + '.png';
        container.addElement(img);
    }
}
<fx:Declarations>
    <s:ContentCache id="ldr" enableQueueing="true"
                    maxActiveRequests="1" maxCacheEntries="36"/>
</fx:Declarations>
Also, if someone could tell me what contentLoaderGrouping is for, I would be very grateful.
Thanks a lot!!!
PS: By the way, both approaches work. The first approach is instant, while the second shows the images being added in a very smooth way, which actually gives a cool effect.
Neither. The point of the suggestion was to NOT alter the display list after creationComplete, which requires an additional update cycle. Instead, you should inject the data property when you push your view onto the stack, and initiate your changes in the setter. Using the ContentCache has nothing to do with it (and can sometimes cause additional overhead if not used correctly).
override public function set data(value:Object):void
{
    super.data = value;
    //this was poorly optimized, so I made it
    //a little better...
    var imageArray:Array = (value == null || value.imageArray == null) ?
        null : value.imageArray as Array;
    if (imageArray == null) //On first creation of the view I create the data object
    {
        imageArray = new Array(36); //set an array that will cache my images.
        for (var i:int = 0; i < 36; i++)
        {
            var img:Image = new Image();
            img.source = 'assets/0' + i.toString() + '.png';
            container.addElement(img);
            imageArray[i] = img;
        }
        super.data = {imageArray: imageArray};
    }
    else //Next time use the saved images
    {
        var n:int = imageArray.length;
        for (var j:int = 0; j < n; j++)
        {
            container.addElement(IVisualElement(imageArray[j]));
        }
    }
}
EDIT
I was mistaken about when the data property is set during the view life-cycle, so you are correct that container would be null at that point. I was going to write up an example for you, but I'm having trouble figuring out what your end goal is here. Is there a specific reason you are storing the images on the data property? I think what you might actually want to do is this:
private var _data:Object = {cache: new ContentCache()};

protected function show_clickHandler(event:MouseEvent):void
{
    this.navigator.pushView(views.MyView, _data);
}
And in the view...
<s:View xmlns:fx="http://ns.adobe.com/mxml/2009"
        xmlns:s="library://ns.adobe.com/flex/spark"
        title="MyView">
    <fx:Script>
        <![CDATA[
            import spark.components.Image;
            import spark.core.ContentCache;

            override protected function createChildren():void
            {
                super.createChildren();
                //you might want to do a sanity check first to make sure the
                //data was passed in correctly...
                var cache:ContentCache = ContentCache(this.data.cache);
                for (var i:int = 0; i < 36; i++)
                {
                    var img:Image = new Image();
                    img.contentLoader = cache;
                    img.source = 'assets/0' + i.toString() + '.png';
                    container.addElement(img);
                }
            }
        ]]>
    </fx:Script>
    <s:VGroup id="container" />
</s:View>
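On the contentLoaderGrouping sub-question: as far as I know, it only matters when enableQueueing is true on the ContentCache. It tags the queued load requests with a group name so the whole group can be reprioritized at once via the cache's prioritize() method, e.g. ldr.prioritize('gr1') to move that group to the front of the queue. Treat this as a hedged pointer and verify against the spark.core.ContentCache documentation.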
