Windows Phone - update live tile from background agent with custom image - windows-phone-7

I am trying to add a cloud image to the album cover when the cover is loaded from the internet. I am doing this in the background audio agent and I think I have almost got it. The problem is that I get a black image in the tile. A few times while testing I got the cover image with my cloud image in it, but mostly I get a black image (and sometimes a black image with the cloud in it).
Can anyone help me find the problem? Thanks
private void UpdateAppTile()
{
    var apptile = ShellTile.ActiveTiles.First();
    if (apptile != null && _playList != null && _playList.Any())
    {
        var track = _playList[currentTrackNumber];
        var size = 360;
        Uri coverUrl;
        if (track.AlbumArt.OriginalString.StartsWith("http"))
        {
            BitmapImage img = null;
            using (AutoResetEvent are = new AutoResetEvent(false))
            {
                string filename = Path.GetFileNameWithoutExtension(track.AlbumArt.OriginalString);
                var urlToNewCover = String.Format("http://.../{0}/{1}", filename, size);
                coverUrl = new Uri(urlToNewCover, UriKind.Absolute);
                Deployment.Current.Dispatcher.BeginInvoke(() =>
                {
                    img = new BitmapImage(coverUrl);
                    are.Set();
                });
                are.WaitOne();
                var wbmp = CreateTileImageWithCloud(img);
                SaveTileImage(wbmp, "/shared/shellcontent/test.jpg");
                coverUrl = new Uri("isostore:/shared/shellcontent/test.jpg", UriKind.RelativeOrAbsolute);
            }
        }
        else
        {
            var coverId = track.Tag.Split(',')[1];
            var urlToNewCover = String.Format("http://.../{0}/{1}", coverId, size);
            coverUrl = new Uri(urlToNewCover, UriKind.Absolute);
        }
        var appTileData = new FlipTileData
        {
            BackgroundImage = coverUrl,
            WideBackgroundImage = coverUrl,
            ...
        };
        apptile.Update(appTileData);
    }
}
public static BitmapImage LoadBitmap(string iFilename)
{
    Uri imgUri = new Uri(iFilename, UriKind.Relative);
    StreamResourceInfo imageResource = Application.GetResourceStream(imgUri);
    BitmapImage image = new BitmapImage();
    image.SetSource(imageResource.Stream);
    return image;
}
private void SaveTileImage(WriteableBitmap wbmp, string filename)
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    {
        if (store.FileExists(filename))
            store.DeleteFile(filename);
        var stream = store.OpenFile(filename, FileMode.OpenOrCreate);
        wbmp.SaveJpeg(stream, wbmp.PixelWidth, wbmp.PixelHeight, 100, 100);
        stream.Close();
    }
}
private WriteableBitmap CreateTileImageWithCloud(BitmapImage img)
{
    Image image = null;
    WriteableBitmap wbmp = null;
    using (AutoResetEvent are = new AutoResetEvent(false))
    {
        Deployment.Current.Dispatcher.BeginInvoke(() =>
        {
            image = new Image { Source = img };
            Canvas.SetLeft(image, 0);
            Canvas.SetTop(image, 0);
            var cloud = new BitmapImage(new Uri("Assets/Images/Other/Cloud_no.png", UriKind.Relative));
            var cloudImg = new Image { Source = cloud };
            Canvas.SetLeft(cloudImg, 125);
            Canvas.SetTop(cloudImg, 10);
            var canvas = new Canvas
            {
                Height = 176,
                Width = 176
            };
            canvas.Children.Add(image);
            canvas.Children.Add(cloudImg);
            wbmp = new WriteableBitmap(176, 176);
            wbmp.Render(canvas, null);
            wbmp.Invalidate();
            are.Set();
        });
        are.WaitOne();
    }
    return wbmp;
}
Edit
I found a small pattern in when this works and when it does not. When the application is running and I call this twice (in TrackReady and SkipNext), I very often get the cover image with the cloud. When only the background agent is running (without the app), I always get a black image. Often the first UpdateAppTile call produces just a black image and the second a black image with the cloud. That black colour is the default canvas background, so I guess the problem is a delay in loading the cover image from the URL. But I am not sure how to use the ImageOpened event in my case, or whether it would help.
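One way to rule out the download delay would be to wait for ImageOpened before rendering. A minimal sketch, assuming the rest of UpdateAppTile stays as above (the timeout value and the CreateOptions choice are my own guesses):

BitmapImage img = null;
using (var imageLoaded = new AutoResetEvent(false))
{
    Deployment.Current.Dispatcher.BeginInvoke(() =>
    {
        img = new BitmapImage();
        img.CreateOptions = BitmapCreateOptions.None;    // start downloading immediately instead of DelayCreation
        img.ImageOpened += (s, e) => imageLoaded.Set();  // signal once the bitmap is downloaded and decoded
        img.ImageFailed += (s, e) => imageLoaded.Set();  // don't block forever on a bad URL
        img.UriSource = coverUrl;
    });
    // Wait (with a timeout) until the cover has actually been loaded before rendering it.
    imageLoaded.WaitOne(TimeSpan.FromSeconds(10));
}
var wbmp = CreateTileImageWithCloud(img);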

I think you should call Measure and Arrange after adding the elements to the canvas and before rendering it (as with other UIElements):
canvas.Measure( new Size( Width, Height ) );
canvas.Arrange( new Rect( 0, 0, Width, Height ) );
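Applied to the CreateTileImageWithCloud method above, that would sit between adding the children and the Render call, roughly like this:

canvas.Children.Add(image);
canvas.Children.Add(cloudImg);

// Force a layout pass so the children have an actual size before rendering.
canvas.Measure(new Size(176, 176));
canvas.Arrange(new Rect(0, 0, 176, 176));

wbmp = new WriteableBitmap(176, 176);
wbmp.Render(canvas, null);
wbmp.Invalidate();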

Related

.NET6 Create Thumbnail Image of 8MB Image Error

I am trying to compress a large image into a 600x600 thumbnail in .NET 6.
My code is:
public static string CreateThumbnail(int maxWidth, int maxHeight, string path)
{
    byte[] bytes = System.IO.File.ReadAllBytes(path);
    using (System.IO.MemoryStream ms = new System.IO.MemoryStream(bytes))
    {
        Image image = Image.FromStream(ms);
        return CreateThumbnail(maxWidth, maxHeight, image, path);
    }
}
I am getting an error on this line:
Image image = Image.FromStream(ms);
The error is System.Runtime.InteropServices.ExternalException: 'A generic error occurred in GDI+.'
The image is 8 MB; the code works fine for small images. What is the problem in the code, or is there a better way to create a thumbnail for large images?
CreateThumbnail has this code, but I get the error before calling this function:
private static string CreateThumbnail(int maxWidth, int maxHeight, Image image, string path)
{
    //var image = System.Drawing.Image.FromStream( (path);
    var ratioX = (double)maxWidth / image.Width;
    var ratioY = (double)maxHeight / image.Height;
    var ratio = Math.Min(ratioX, ratioY);
    var newWidth = (int)(image.Width * ratio);
    var newHeight = (int)(image.Height * ratio);
    using (var newImage = new Bitmap(newWidth, newHeight))
    {
        using (Graphics thumbGraph = Graphics.FromImage(newImage))
        {
            thumbGraph.CompositingQuality = CompositingQuality.Default;
            thumbGraph.SmoothingMode = SmoothingMode.Default;
            //thumbGraph.InterpolationMode = InterpolationMode.HighQualityBicubic;
            thumbGraph.DrawImage(image, 0, 0, newWidth, newHeight);
            image.Dispose();
            //string fileRelativePath = Path.GetFileName(path);
            //newImage.Save(path, newImage.RawFormat);
            SaveJpeg(path, newImage, 100);
        }
    }
    return path;
}
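For the "is there a better way" part: one option is SixLabors.ImageSharp (the same library used in the ZXing answer further down this page), which avoids GDI+ entirely. A minimal sketch of the same max-fit resize; the package reference and the choice to overwrite the file in place are assumptions:

using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;

public static string CreateThumbnail(int maxWidth, int maxHeight, string path)
{
    using (var image = Image.Load(path))
    {
        // ResizeMode.Max keeps the aspect ratio and fits the image inside maxWidth x maxHeight,
        // which is the same Math.Min(ratioX, ratioY) logic as the original code.
        image.Mutate(x => x.Resize(new ResizeOptions
        {
            Mode = ResizeMode.Max,
            Size = new Size(maxWidth, maxHeight)
        }));
        image.Save(path); // overwrites in place, as SaveJpeg(path, ...) appears to do
    }
    return path;
}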

Xamarin: Unable to extract central square of photo in Android/iOS

I am trying to get an image using the camera. The image is to be 256x256 and I want it to come from the centre of a photo taken using the camera on a phone. I found this code at: https://forums.xamarin.com/discussion/37647/cross-platform-crop-image-view
I am using this code for Android...
public byte[] CropPhoto(byte[] photoToCropBytes, Rectangle rectangleToCrop, double outputWidth, double outputHeight)
{
    using (var photoOutputStream = new MemoryStream())
    {
        // Load the bitmap
        var inSampleSize = CalculateInSampleSize((int)rectangleToCrop.Width, (int)rectangleToCrop.Height, (int)outputWidth, (int)outputHeight);
        var options = new BitmapFactory.Options();
        options.InSampleSize = inSampleSize;
        //options.InPurgeable = true; see http://developer.android.com/reference/android/graphics/BitmapFactory.Options.html
        using (var photoToCropBitmap = BitmapFactory.DecodeByteArray(photoToCropBytes, 0, photoToCropBytes.Length, options))
        {
            var matrix = new Matrix();
            var martixScale = outputWidth / rectangleToCrop.Width * inSampleSize;
            matrix.PostScale((float)martixScale, (float)martixScale);
            using (var photoCroppedBitmap = Bitmap.CreateBitmap(photoToCropBitmap, (int)(rectangleToCrop.X / inSampleSize), (int)(rectangleToCrop.Y / inSampleSize), (int)(rectangleToCrop.Width / inSampleSize), (int)(rectangleToCrop.Height / inSampleSize), matrix, true))
            {
                photoCroppedBitmap.Compress(Bitmap.CompressFormat.Jpeg, 100, photoOutputStream);
            }
        }
        return photoOutputStream.ToArray();
    }
}

public static int CalculateInSampleSize(int inputWidth, int inputHeight, int outputWidth, int outputHeight)
{
    //see http://developer.android.com/training/displaying-bitmaps/load-bitmap.html
    int inSampleSize = 1; //default
    if (inputHeight > outputHeight || inputWidth > outputWidth) {
        int halfHeight = inputHeight / 2;
        int halfWidth = inputWidth / 2;
        // Calculate the largest inSampleSize value that is a power of 2 and keeps both
        // height and width larger than the requested height and width.
        while ((halfHeight / inSampleSize) > outputHeight && (halfWidth / inSampleSize) > outputWidth)
        {
            inSampleSize *= 2;
        }
    }
    return inSampleSize;
}
and this code for iOS...
public byte[] CropPhoto(byte[] photoToCropBytes, Xamarin.Forms.Rectangle rectangleToCrop, double outputWidth, double outputHeight)
{
    byte[] photoOutputBytes;
    using (var data = NSData.FromArray(photoToCropBytes))
    {
        using (var photoToCropCGImage = UIImage.LoadFromData(data).CGImage)
        {
            //crop image
            using (var photoCroppedCGImage = photoToCropCGImage.WithImageInRect(new CGRect((nfloat)rectangleToCrop.X, (nfloat)rectangleToCrop.Y, (nfloat)rectangleToCrop.Width, (nfloat)rectangleToCrop.Height)))
            {
                using (var photoCroppedUIImage = UIImage.FromImage(photoCroppedCGImage))
                {
                    //create a 24bit RGB image to the output size
                    using (var cGBitmapContext = new CGBitmapContext(IntPtr.Zero, (int)outputWidth, (int)outputHeight, 8, (int)(4 * outputWidth), CGColorSpace.CreateDeviceRGB(), CGImageAlphaInfo.PremultipliedFirst))
                    {
                        var photoOutputRectangleF = new RectangleF(0f, 0f, (float)outputWidth, (float)outputHeight);
                        // draw the cropped photo resized
                        cGBitmapContext.DrawImage(photoOutputRectangleF, photoCroppedUIImage.CGImage);
                        //get cropped resized photo
                        var photoOutputUIImage = UIKit.UIImage.FromImage(cGBitmapContext.ToImage());
                        //convert cropped resized photo to bytes and then stream
                        using (var photoOutputNsData = photoOutputUIImage.AsJPEG())
                        {
                            photoOutputBytes = new Byte[photoOutputNsData.Length];
                            System.Runtime.InteropServices.Marshal.Copy(photoOutputNsData.Bytes, photoOutputBytes, 0, Convert.ToInt32(photoOutputNsData.Length));
                        }
                    }
                }
            }
        }
    }
    return photoOutputBytes;
}
I am struggling to work out exactly what parameters to pass when calling the function.
Currently, I am doing the following:
double cropSize = Math.Min(DeviceDisplay.MainDisplayInfo.Width, DeviceDisplay.MainDisplayInfo.Height);
double left = (DeviceDisplay.MainDisplayInfo.Width - cropSize) / 2.0;
double top = (DeviceDisplay.MainDisplayInfo.Height - cropSize) / 2.0;
// Get a square resized and cropped from the top image as a byte[]
_imageData = mediaService.CropPhoto(_imageData, new Rectangle(left, top, cropSize, cropSize), 256, 256);
I was expecting this to crop the image to the central square (in portrait mode the side length would be the width of the photo) and then scale it down to a 256x256 image. But it never picks the centre of the image.
Has anyone ever used this code and can tell me what I need to pass in for the 'rectangleToCrop' parameter?
Note: Both Android and iOS give the same image, just not the central part that I was expecting.
Here are the two routines I used:
Android:
public byte[] ResizeImageAndCropToSquare(byte[] rawPhoto, int outputSize)
{
    // Create object of bitmapfactory's option method for further option use
    BitmapFactory.Options options = new BitmapFactory.Options();
    // InPurgeable is used to free up memory while required
    options.InPurgeable = true;
    // Get the original image
    using (var originalImage = BitmapFactory.DecodeByteArray(rawPhoto, 0, rawPhoto.Length, options))
    {
        // The shortest edge will determine the size of the square image
        int cropSize = Math.Min(originalImage.Width, originalImage.Height);
        int left = (originalImage.Width - cropSize) / 2;
        int top = (originalImage.Height - cropSize) / 2;
        using (var squareImage = Bitmap.CreateBitmap(originalImage, left, top, cropSize, cropSize))
        {
            // Resize the square image to the correct size of an Avatar
            using (var resizedImage = Bitmap.CreateScaledBitmap(squareImage, outputSize, outputSize, true))
            {
                // Return the raw data of the resized image
                using (MemoryStream resizedImageStream = new MemoryStream())
                {
                    // Resize the image maintaining 100% quality
                    resizedImage.Compress(Bitmap.CompressFormat.Png, 100, resizedImageStream);
                    return resizedImageStream.ToArray();
                }
            }
        }
    }
}
iOS:
private const int BitsPerComponent = 8;

public byte[] ResizeImageAndCropToSquare(byte[] rawPhoto, int outputSize)
{
    using (var data = NSData.FromArray(rawPhoto))
    {
        using (var photoToCrop = UIImage.LoadFromData(data).CGImage)
        {
            nint photoWidth = photoToCrop.Width;
            nint photoHeight = photoToCrop.Height;
            nint cropSize = photoWidth < photoHeight ? photoWidth : photoHeight;
            nint left = (photoWidth - cropSize) / 2;
            nint top = (photoHeight - cropSize) / 2;
            // Crop image
            using (var photoCropped = photoToCrop.WithImageInRect(new CGRect(left, top, cropSize, cropSize)))
            {
                using (var photoCroppedUIImage = UIImage.FromImage(photoCropped))
                {
                    // Create a 24bit RGB image of output size
                    using (var cGBitmapContext = new CGBitmapContext(IntPtr.Zero, outputSize, outputSize, BitsPerComponent, outputSize << 2, CGColorSpace.CreateDeviceRGB(), CGImageAlphaInfo.PremultipliedFirst))
                    {
                        var photoOutputRectangleF = new RectangleF(0f, 0f, outputSize, outputSize);
                        // Draw the cropped photo resized
                        cGBitmapContext.DrawImage(photoOutputRectangleF, photoCroppedUIImage.CGImage);
                        // Get cropped resized photo
                        var photoOutputUIImage = UIImage.FromImage(cGBitmapContext.ToImage());
                        // Convert cropped resized photo to bytes and then stream
                        using (var photoOutputNsData = photoOutputUIImage.AsPNG())
                        {
                            var rawOutput = new byte[photoOutputNsData.Length];
                            Marshal.Copy(photoOutputNsData.Bytes, rawOutput, 0, Convert.ToInt32(photoOutputNsData.Length));
                            return rawOutput;
                        }
                    }
                }
            }
        }
    }
}
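Unlike the original call, these routines take the crop square from the decoded photo's own pixel dimensions rather than from DeviceDisplay, which is why they always hit the true centre. A possible call site, assuming mediaService exposes the new method and _imageData still holds the raw photo bytes:

// Replace the old CropPhoto call; 256 is the output size in pixels.
_imageData = mediaService.ResizeImageAndCropToSquare(_imageData, 256);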

If I mirror an image with the canvas multiple times, why does the image lose quality?

mirror() {
    const mirrorCanvas = document.createElement('canvas') as HTMLCanvasElement;
    const clientRect = this.parentRef.nativeElement.getBoundingClientRect();
    const image = new Image();
    image.src = this.imageBase64;
    image.onload = () => {
        mirrorCanvas.width = clientRect.width;
        mirrorCanvas.height = clientRect.height;
        const ctx = mirrorCanvas.getContext('2d');
        if (ctx) {
            ctx.scale(1, -1);
            ctx.drawImage(image, 0, 0, clientRect.width, clientRect.height);
            this.imageBase64 = (mirrorCanvas.toDataURL(`image/${this.format}`, 1));
        }
    }
}
I have this code, which mirrors the image.
However, if I keep mirroring the image, it loses quality, even if I set the quality to 1.
Why is that?

How to create barcode image with ZXing.Net and ImageSharp in .Net Core 2.0

I'm trying to generate a barcode image. When I use the following code I can create a base64 string, but it gives a blank image. I checked that the content is not blank or whitespace.
There are examples using CoreCompat.System.Drawing, but I couldn't make them work because I am working in an OS X environment.
Am I doing something wrong?
Code:
[HtmlTargetElement("barcode")]
public class BarcodeHelper: TagHelper {
public override void Process(TagHelperContext context, TagHelperOutput output) {
var content = context.AllAttributes["content"].Value.ToString();
var alt = context.AllAttributes["alt"].Value.ToString();
var width = 250;
var height = 250;
var margin = 0;
var barcodeWriter = new ZXing.BarcodeWriterPixelData {
Format = ZXing.BarcodeFormat.CODE_128,
Options = new QrCodeEncodingOptions {
Height = height, Width = width, Margin = margin
}
};
var pixelData = barcodeWriter.Write(content);
using (var image = Image.LoadPixelData<Rgba32>(pixelData.Pixels, width, height))
{
output.TagName = "img";
output.Attributes.Clear();
output.Attributes.Add("width", width);
output.Attributes.Add("height", height);
output.Attributes.Add("alt", alt);
output.Attributes.Add("src", string.Format("data:image/png;base64,{0}", image.ToBase64String(ImageFormats.Png)));
}
}
}
There are some code snippets like the one below. They can write the content and easily convert the result data to a base64 string. But when I call BarcodeWriter it needs a type parameter <TOutput>, and I don't know what to pass. I am using ZXing.Net 0.16.2.
var writer = BarcodeWriter // BarcodeWriter without <TOutput> is missing. There is BarcodeWriter<TOutput> I can call.
{
    Format = BarcodeFormat.CODE_128
}
var result = writer.write("content");
The current version (0.16.2) of the pixel data renderer uses a wrong alpha channel value, so the whole barcode is transparent.
Additionally, with my version of ImageSharp I had to remove the "data:image/png;base64,{0}" prefix, because image.ToBase64String already includes it.
Complete modified code:
[HtmlTargetElement("barcode")]
public class BarcodeHelper: TagHelper {
public override void Process(TagHelperContext context, TagHelperOutput output) {
var content = context.AllAttributes["content"].Value.ToString();
var alt = context.AllAttributes["alt"].Value.ToString();
var width = 250;
var height = 250;
var margin = 0;
var barcodeWriter = new ZXing.BarcodeWriterPixelData {
Format = ZXing.BarcodeFormat.CODE_128,
Options = new EncodingOptions {
Height = height, Width = width, Margin = margin
},
Renderer = new PixelDataRenderer {
Foreground = new PixelDataRenderer.Color(unchecked((int)0xFF000000)),
Background = new PixelDataRenderer.Color(unchecked((int)0xFFFFFFFF)),
}
};
var pixelData = barcodeWriter.Write(content);
using (var image = Image.LoadPixelData<Rgba32>(pixelData.Pixels, width, height))
{
output.TagName = "img";
output.Attributes.Clear();
output.Attributes.Add("width", pixelData.Width);
output.Attributes.Add("height", pixelData.Height);
output.Attributes.Add("alt", alt);
output.Attributes.Add("src", string.Format( image.ToBase64String(ImageFormats.Png)));
}
}
}
It's also possible to use the ImageSharp binding package (ZXing.Net.Bindings.ImageSharp).
[HtmlTargetElement("barcode")]
public class BarcodeHelper: TagHelper {
public override void Process(TagHelperContext context, TagHelperOutput output) {
var content = context.AllAttributes["content"].Value.ToString();
var alt = context.AllAttributes["alt"].Value.ToString();
var width = 250;
var height = 250;
var margin = 0;
var barcodeWriter = new ZXing.ImageSharp.BarcodeWriter<Rgba32> {
Format = ZXing.BarcodeFormat.CODE_128,
Options = new EncodingOptions {
Height = height, Width = width, Margin = margin
}
};
using (var image = barcodeWriter.Write(content))
{
output.TagName = "img";
output.Attributes.Clear();
output.Attributes.Add("width", image.Width);
output.Attributes.Add("height", image.Height);
output.Attributes.Add("alt", alt);
output.Attributes.Add("src", string.Format( image.ToBase64String(ImageFormats.Png)));
}
}
}
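If the binding package is not already referenced, it can be added via NuGet (the package name is taken from the answer above):

dotnet add package ZXing.Net.Bindings.ImageSharp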

Resizing dynamically loaded images Flash AS3

I have a series of banners (standard sizes) which all need to load the same corresponding image for each slide. I can load them fine, but I want the image to match the size of the container MC that the image is being loaded into. Is that possible? Either that, or to set the height/width manually...
Everything I have tried doesn't work. Here is the code for the working state (where it just loads the image, which stretches across the stage):
Code:
var myImage:String = dynamicContent.Profile[0].propImage.Url;

function myImageLoader(file:String, x:Number, y:Number):StudioLoader {
    var myImageLoader:StudioLoader = new StudioLoader();
    var request:URLRequest = new URLRequest(file);
    myImageLoader.load(request);
    myImageLoader.x = -52;
    myImageLoader.y = -30;
    return myImageLoader;
}

propImage1.addChild(loadImage(enabler.getUrl(myImage),-20,0));
You can resize your loaded image after Event.COMPLETE fires on the LoaderInfo (contentLoaderInfo) of your Loader, so you can do it like this:
var request:URLRequest = new URLRequest('http://www.example.com/image.jpg');
var loader:Loader = new Loader();

loader.contentLoaderInfo.addEventListener(Event.COMPLETE, on_loadComplete);

function on_loadComplete(e:Event):void {
    var image:DisplayObject = loader.content;
    image.x = 100;
    image.y = 100;
    image.width = 300;
    image.height = 200;
    addChild(image);
}

loader.load(request);
Edit:
load_image('http://www.example.com/image.jpg');

function load_image(url) {
    var request:URLRequest = new URLRequest(url);
    var loader:Loader = new Loader();
    loader.contentLoaderInfo.addEventListener(Event.COMPLETE, on_loadComplete);
    function on_loadComplete(e:Event):void {
        add_image(loader.content);
    }
    loader.load(request);
}

function add_image(image:DisplayObject, _x:int = 0, _y:int = 0, _width:int = 100, _height:int = 100) {
    image.x = _x;
    image.y = _y;
    image.width = _width;
    image.height = _height;
    addChild(image);
}
Hope that can help.
