WriteableBitmap not invalidating and rendering in Windows Phone but working in Silverlight 5? - windows-phone-7

I have a property that returns a WriteableBitmap. When I bind that property to an Image in Silverlight 5 the image is shown, but when I do the same thing in Windows Phone it is not shown. Googling around this issue, I read that on Windows Phone the raw pixel values do not have the alpha bits set, yet the same code works in Silverlight. I don't know what is happening. Has anybody seen a similar issue, or found a workaround?
(Imageproperty as WriteableBitmap).Invalidate();
<Image Source="{Binding Imageproperty}"/> (this works in Silverlight 5 but not in WP 7.1)

I had a similar sort of issue when trying to port some of the FaceLight code across from Silverlight to Windows Phone. The easiest way around this is to manually set the alpha value of every pixel to 255 (opaque). So, say you had the WriteableBitmap that you wanted to display in result:
// Convert to a byte array (ToByteArray / FromByteArray are extension methods, e.g. from WriteableBitmapEx)
int stride = result.PixelWidth * 4;                // BGRA32: 4 bytes per pixel
int bytes = Math.Abs(stride) * result.PixelHeight; // total size of the pixel buffer
byte[] data = result.ToByteArray();
int dataIndex = 0;
int nOffset = stride - result.PixelWidth * 4;      // row padding (0 for this buffer)
for (int y = 0; y < result.PixelHeight; ++y)
{
    for (int x = 0; x < result.PixelWidth; ++x)
    {
        data[dataIndex + 3] = 0xFF; // set alpha to 255 (opaque)
        dataIndex += 4;             // skip to the next pixel
    }
    dataIndex += nOffset;
}
WriteableBitmap finalImg = new WriteableBitmap(result.PixelWidth, result.PixelHeight);
return finalImg.FromByteArray(data);
The result of this method (the finalImg.FromByteArray(data) call) should display properly on the phone.
An alternative to the method above is writing the WriteableBitmap out as a .jpeg and then displaying the .jpeg instead - that seemed to work for me as well, but I didn't investigate it too thoroughly.
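For what it's worth, here is a minimal sketch of that JPEG round-trip, assuming the bitmap is in result and there is an Image control named ImageControl in the XAML (ImageControl is a hypothetical name; SaveJpeg is the Windows Phone imaging extension method):
using (var ms = new MemoryStream())
{
    // Encode the bitmap as JPEG; 0 = orientation, 85 = quality.
    result.SaveJpeg(ms, result.PixelWidth, result.PixelHeight, 0, 85);
    ms.Seek(0, SeekOrigin.Begin);

    // Decode the JPEG back into a bindable ImageSource.
    var jpegBitmap = new BitmapImage();
    jpegBitmap.SetSource(ms);
    ImageControl.Source = jpegBitmap; // ImageControl is a hypothetical <Image> element
}
Since JPEG has no alpha channel, the decoded image comes back fully opaque, which is why this sidesteps the transparency problem.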

If the WriteableBitmap has its pixel data set or changed after the binding is established, you need to call Invalidate to cause an update of the screen.
This applies to both Silverlight and Windows Phone, but you might have a race condition here that plays out differently on the two platforms.
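As a rough illustration of that pattern (a minimal sketch; bitmap is an assumed WriteableBitmap field that the Image is bound to, and the new pixel data arrives off the UI thread):
void UpdatePixels(int[] newPixels)
{
    // Copy the new data into the bitmap's pixel buffer...
    Array.Copy(newPixels, bitmap.Pixels, Math.Min(newPixels.Length, bitmap.Pixels.Length));

    // ...then marshal Invalidate onto the UI thread so the bound Image repaints.
    Deployment.Current.Dispatcher.BeginInvoke(() => bitmap.Invalidate());
}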

I know it's an old post, but today I encountered the same problem on Windows Phone 8.1 Silverlight and found a nice solution, so I've decided to leave it here for people with a similar problem. It was posted by Charles Petzold on MSDN as Video Feeds on Windows Phone 7 (it is shown with a VideoSink class example, but it shouldn't be hard to reproduce for other cases). He created a simple class that derives from VideoSink:
SimpleVideoSink C# code:
public class SimpleVideoSink : VideoSink
{
    VideoFormat videoFormat;
    WriteableBitmap writeableBitmap;
    Action<WriteableBitmap> action;

    public SimpleVideoSink(Action<WriteableBitmap> action)
    {
        this.action = action;
    }

    protected override void OnCaptureStarted() { }

    protected override void OnCaptureStopped() { }

    protected override void OnFormatChange(VideoFormat videoFormat)
    {
        this.videoFormat = videoFormat;

        System.Windows.Deployment.Current.Dispatcher.BeginInvoke(() =>
        {
            writeableBitmap = new WriteableBitmap(videoFormat.PixelWidth,
                                                  videoFormat.PixelHeight);
            action(writeableBitmap);
        });
    }

    protected override void OnSample(long sampleTimeInHundredNanoseconds,
                                     long frameDurationInHundredNanoseconds, byte[] sampleData)
    {
        if (writeableBitmap == null)
            return;

        int baseIndex = 0;

        for (int row = 0; row < writeableBitmap.PixelHeight; row++)
        {
            for (int col = 0; col < writeableBitmap.PixelWidth; col++)
            {
                int pixel = 0;

                if (videoFormat.PixelFormat == PixelFormatType.Format8bppGrayscale)
                {
                    byte grayShade = sampleData[baseIndex + col];
                    pixel = (int)grayShade | (grayShade << 8) |
                            (grayShade << 16) | (0xFF << 24);
                }
                else
                {
                    int index = baseIndex + 4 * col;
                    pixel = (int)sampleData[index + 0] | (sampleData[index + 1] << 8) |
                            (sampleData[index + 2] << 16) | (sampleData[index + 3] << 24);
                }

                writeableBitmap.Pixels[row * writeableBitmap.PixelWidth + col] = pixel;
            }
            baseIndex += videoFormat.Stride;
        }

        writeableBitmap.Dispatcher.BeginInvoke(() =>
        {
            writeableBitmap.Invalidate();
        });
    }
}
However, this code needs a small modification: VideoSink.CaptureSource must be provided with our CaptureSource (I just passed it into the constructor):
public SimpleVideoSink(CaptureSource captureSource, Action<WriteableBitmap> action)
{
    base.CaptureSource = captureSource;
    this.action = action;
}
When we initialize the SimpleVideoSink class in a ViewModel, we have to provide it with an Action parameter. In my case it was enough to have the ViewModel store the initialized WriteableBitmap in a bound property:
ViewModel C# code:
private WriteableBitmap videoWriteableBitmap;
public WriteableBitmap VideoWriteableBitmap
{
    get
    {
        return videoWriteableBitmap;
    }
    set
    {
        videoWriteableBitmap = value;
        RaisePropertyChanged("VideoWriteableBitmap");
    }
}

private void OnWriteableBitmapChange(WriteableBitmap writeableBitmap)
{
    VideoWriteableBitmap = writeableBitmap;
}

// this part goes into the constructor or an initialization method
SimpleVideoSink videoFrameHandler = new SimpleVideoSink(captureSource, OnWriteableBitmapChange);
Then all we have to do is bind it in the View:
View XAML code:
<Image Source="{Binding VideoWriteableBitmap}" />
In this example the Image source is refreshed on every OnSample invocation, with the update dispatched to the UI thread through WriteableBitmap.Dispatcher.
This solution generates a proper image with no blank pixels (the alpha channel is also filled), and the Invalidate() method works as it should.
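For reference, a minimal sketch of how the sink might be wired up in the ViewModel constructor (CaptureSource and CaptureDeviceConfiguration are the standard Silverlight capture APIs; the field names here are assumptions):
// Create a capture source for the default camera, attach the sink, and start capturing.
captureSource = new CaptureSource
{
    VideoCaptureDevice = CaptureDeviceConfiguration.GetDefaultVideoCaptureDevice()
};
videoFrameHandler = new SimpleVideoSink(captureSource, OnWriteableBitmapChange);
captureSource.Start(); // OnFormatChange creates the WriteableBitmap; OnSample then fills it each frame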

Related

Clipboard: What is an Object Descriptor type and how can I write it?

So I made a small application that basically takes whatever image is in the clipboard (memory) and tries to draw it.
This is a sample of the code:
private EventHandler<KeyEvent> copyPasteEvent = new EventHandler<KeyEvent>() {
    final KeyCombination ctrl_V = new KeyCodeCombination(KeyCode.V, KeyCombination.CONTROL_DOWN);

    @Override
    public void handle(KeyEvent event) {
        if (ctrl_V.match(event)) {
            System.out.println("Ctrl+V pressed");
            Clipboard clipboard = Clipboard.getSystemClipboard();
            System.out.println(clipboard.getContentTypes());

            // Change canvas size if necessary to allow space for the image to fit
            Image copiedImage = clipboard.getImage();
            if (copiedImage.getHeight() > canvas.getHeight()) {
                canvas.setHeight(copiedImage.getHeight());
            }
            if (copiedImage.getWidth() > canvas.getWidth()) {
                canvas.setWidth(copiedImage.getWidth());
            }
            gc.drawImage(copiedImage, 0, 0);
        }
    }
};
These are the images that were drawn and the corresponding data types (screenshots omitted): a print of my screen, and an image from the internet. However, when I copy and paste a raw image directly from Paint...
Object Descriptor is an OLE format from Microsoft.
This is why when you copy an image from a Microsoft application, you get these descriptors from Clipboard.getSystemClipboard().getContentTypes():
[[application/x-java-rawimage], [Object Descriptor]]
As for getting the image out of the clipboard... let's try two possible ways to do it: AWT and JavaFX.
AWT
Let's use the AWT toolkit to get the system clipboard and, if there is an image on it, retrieve a BufferedImage. Then we can easily convert it to a JavaFX Image and place it in an ImageView:
try {
    DataFlavor[] availableDataFlavors = Toolkit.getDefaultToolkit()
            .getSystemClipboard().getAvailableDataFlavors();
    for (DataFlavor f : availableDataFlavors) {
        System.out.println("AWT Flavor: " + f);
        if (f.equals(DataFlavor.imageFlavor)) {
            BufferedImage data = (BufferedImage) Toolkit.getDefaultToolkit()
                    .getSystemClipboard().getData(DataFlavor.imageFlavor);
            System.out.println("data " + data);

            // Convert to JavaFX:
            WritableImage img = new WritableImage(data.getWidth(), data.getHeight());
            SwingFXUtils.toFXImage(data, img);
            imageView.setImage(img);
        }
    }
} catch (UnsupportedFlavorException | IOException ex) {
    System.out.println("Error " + ex);
}
It prints:
AWT Flavor: java.awt.datatransfer.DataFlavor[mimetype=image/x-java-image;representationclass=java.awt.Image]
data BufferedImage@3e4eca95: type = 1 DirectColorModel: rmask=ff0000 gmask=ff00 bmask=ff amask=0 IntegerInterleavedRaster: width = 350 height = 364 #Bands = 3 xOff = 0 yOff = 0 dataOffset[0] 0
and displays your image (screenshot omitted).
This part was based on this answer.
JavaFX
Why didn't we try it with JavaFX in the first place? Well, we could have tried directly:
Image content = (Image) Clipboard.getSystemClipboard().getContent(DataFormat.IMAGE);
imageView.setImage(content);
and you will get a valid Image, but when adding it to an ImageView it will be blank, as you already noticed, or show invalid colors.
So how can we get a valid image? If you check the BufferedImage above, it shows type = 1, which means BufferedImage.TYPE_INT_RGB; in other words, it is an image with 8-bit RGB color components packed into integer pixels, without an alpha component.
My guess is that the JavaFX implementation for Windows doesn't process this image format correctly, as it probably expects an RGBA format. You can check here how the image is extracted, and if you want to dive into the native implementation, check the native-glass/win/GlassClipboard.cpp code.
So we can try to do it with a PixelReader. Let's read the image and return a byte array:
private byte[] imageToData(Image image) {
    int width = (int) image.getWidth();
    int height = (int) image.getHeight();
    byte[] data = new byte[width * height * 3];
    int i = 0;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int argb = image.getPixelReader().getArgb(x, y);
            int r = (argb >> 16) & 0xFF;
            int g = (argb >> 8) & 0xFF;
            int b = argb & 0xFF;
            data[i++] = (byte) r;
            data[i++] = (byte) g;
            data[i++] = (byte) b;
        }
    }
    return data;
}
Now, all we need to do is use this byte array to write a new image and set it to the ImageView:
Image content = (Image) Clipboard.getSystemClipboard().getContent(DataFormat.IMAGE);
byte[] data = imageToData(content);

WritableImage writableImage = new WritableImage((int) content.getWidth(), (int) content.getHeight());
PixelWriter pixelWriter = writableImage.getPixelWriter();
pixelWriter.setPixels(0, 0, (int) content.getWidth(), (int) content.getHeight(),
        PixelFormat.getByteRgbInstance(), data, 0, (int) content.getWidth() * 3);
imageView.setImage(writableImage);
And now you will get the same result, but using only JavaFX.

XNA 4.0 SpriteBatch - render target must not be set on the device

I am converting an XNA 3.0 cloud effect to XNA 4.0, but I get an error:
"The render target must not be set on the device when it is used as a texture."
It happens on the second pass through the loop, on this line:
mover.Parameters["PressureMap"].SetValue(PressureOffsets);
Code:
for (int i = 0; i < 10; i++)
{
    graf.SetRenderTarget(rt5);
    mover.Parameters["PressureMap"].SetValue(PressureOffsets);
    mover.Parameters["DivergenceMap"].SetValue(Divergence);
    mover.CurrentTechnique = mover.Techniques["Jacobi"];
    mover.CurrentTechnique.Passes[0].Apply();
    sp.Begin(SpriteSortMode.Deferred, BlendState.Opaque, SamplerState.PointWrap, DepthStencilState.DepthRead, RasterizerState.CullNone);
    sp.Draw(Velocity, new Vector2(0, 0), Color.White);
    sp.End();
    graf.SetRenderTarget(null);
    PressureOffsets = rt5;
}
It seems that in XNA 4.0 I can't set a texture as an effect parameter if it is already set as the render target, but I don't know how to convert this to work in XNA 4.0 :(
I solved it. It's necessary to replace the render target with a new one each pass. I used a helper class, RenderTargetState:
public RenderTargetState SetToDevice()
{
    oldState = new RenderTargetState(GraphicsDevice);
    GraphicsDevice.SetRenderTarget(new RenderTarget2D(GraphicsDevice, renderTargetWidth, renderTargetHeight));
    return oldState;
}

public RenderTargetState BeginRenderToTexture()
{
    oldState = SetToDevice();
    return oldState;
}

public Texture2D EndRenderGetTexture()
{
    RenderTargetState renderBuffer = oldState.SetToDevice();
    oldState = null;
    return renderBuffer.RenderTarget;
}
So the code looks like this now:
for (int i = 0; i < 10; i++)
{
    rt5.BeginRenderToTexture();
    graf.SetRenderTarget(rt5.RenderTarget);
    mover.Parameters["PressureMap"].SetValue(PressureOffsets);
    mover.Parameters["DivergenceMap"].SetValue(Divergence);
    mover.CurrentTechnique = mover.Techniques["Jacobi"];
    mover.CurrentTechnique.Passes[0].Apply();
    sp.Begin(SpriteSortMode.Deferred, BlendState.Opaque);
    sp.Draw(Velocity, new Vector2(0, 0), Color.White);
    sp.End();
    graf.SetRenderTarget(null);
    PressureOffsets = rt5.EndRenderGetTexture();
}
There's an easier way to do this.
rt5 = PressureOffsets as RenderTarget2D;
graf.SetRenderTarget(null);
PressureOffsets = rt5;
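For completeness, the usual way to avoid this error in XNA 4.0 is to ping-pong between two render targets, so the texture being sampled is never the target currently being written. A minimal sketch under that assumption (rtA and rtB are hypothetical RenderTarget2D instances created with the same size as rt5):
RenderTarget2D readTarget = rtA;   // holds the previous pass's result
RenderTarget2D writeTarget = rtB;  // receives this pass's result

for (int i = 0; i < 10; i++)
{
    graf.SetRenderTarget(writeTarget);

    mover.CurrentTechnique = mover.Techniques["Jacobi"];
    mover.Parameters["PressureMap"].SetValue(readTarget);   // safe: readTarget is not bound
    mover.Parameters["DivergenceMap"].SetValue(Divergence);

    // In XNA 4.0 a custom Effect can be handed to SpriteBatch.Begin directly.
    sp.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null, mover);
    sp.Draw(Velocity, Vector2.Zero, Color.White);
    sp.End();

    graf.SetRenderTarget(null);

    // Swap roles so the next pass reads what was just written.
    RenderTarget2D tmp = readTarget;
    readTarget = writeTarget;
    writeTarget = tmp;
}

PressureOffsets = readTarget; // final pressure texture
Unlike the helper-class approach above, this does not allocate a new RenderTarget2D on every pass.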

How to run processing script on multiple frames in a folder

Using Processing, I am trying to run a script that will process a folder full of frames.
The script is a combination of PixelSortFrames and SortThroughSeamCarving.
I am new to Processing, and what I want does not seem to be working. I would like the script to loop back and pick the next file in the folder to process. At the moment it stops at the end and does not move on to the next file (there are three other modules also involved).
Any help would be much appreciated. :(
/* ASDFPixelSort for video frames v1.0
   Original ASDFPixelSort by Kim Asendorf <http://kimasendorf.com>
   https://github.com/kimasendorf/ASDFPixelSort
   Fork by dx <http://dequis.org> and chinatsu <http://360nosco.pe> */
// Main configuration
String basedir = ".../Images/Seq_002"; // Specify the directory in which the frames are located. Use forward slashes.
String fileext = ".jpg"; // Change to the format your images are in.
int resumeprocess = 0; // If you wish to resume a previously stopped process, change this value.
boolean reverseIt = true;
boolean saveIt = true;
int mode = 2; // MODE: 0 = black, 1 = bright, 2 = white
int blackValue = -10000000;
int brightnessValue = -1;
int whiteValue = -6000000;
// -------
PImage img, original;
float[][] sums;
int bottomIndex = 0;
String[] filenames;
int row = 0;
int column = 0;
int i = 0;
java.io.File folder = new java.io.File(dataPath(basedir));
java.io.FilenameFilter extfilter = new java.io.FilenameFilter() {
  boolean accept(File dir, String name) {
    return name.toLowerCase().endsWith(fileext);
  }
};
void setup() {
  if (resumeprocess > 0) {i = resumeprocess - 1; frameCount = i;}
  size(1504, 1000); // Resolution of the frames. It's likely there's a better way of doing this..
  filenames = folder.list(extfilter);
  size(1504, 1000);
  println(" " + width + " x " + height + " px");
  println("Creating buffer images...");
  PImage hImg = createImage(1504, 1000, RGB);
  PImage vImg = createImage(1504, 1000, RGB);

  // draw image and convert to grayscale
  if (i + 1 > filenames.length) {println("Uh.. Done!"); System.exit(0);}
  img = loadImage(basedir+"/"+filenames[i]);
  original = loadImage(basedir+"/"+filenames[i]);
  image(img, 0, 0);
  filter(GRAY);
  img.loadPixels(); // updatePixels is in the 'runKernals'

  // run kernels to create "energy map"
  println("Running kernals on image...");
  runKernels(hImg, vImg);
  image(img, 0, 0);

  // sum pathways through the image
  println("Getting sums through image...");
  sums = getSumsThroughImage();
  image(img, 0, 0);
  loadPixels();

  // get start point (smallest value) - this is used to find the
  // best seam (starting at the lowest energy)
  bottomIndex = width/2;
  // bottomIndex = findStartPoint(sums, 50);
  println("Bottom index: " + bottomIndex);

  // find the pathway with the lowest information
  int[] path = new int[height];
  path = findPath(bottomIndex, sums, path);

  for (int bi=0; bi<width; bi++) {
    // get the pixels of the path from the original image
    original.loadPixels();
    color[] c = new color[path.length]; // create array of the seam's color values
    for (int i=0; i<c.length; i++) {
      try {
        c[i] = original.pixels[i*width + path[i] + bi]; // set color array to values from original image
      }
      catch (Exception e) {
        // when we run out of pixels, just ignore
      }
    }
    println(" " + bi);

    c = sort(c); // sort (use better algorithm later)
    if (reverseIt) {
      c = reverse(c);
    }

    for (int i=0; i<c.length; i++) {
      try {
        original.pixels[i*width + path[i] + bi] = c[i]; // reverse! set the pixels of the original from sorted array
      }
      catch (Exception e) {
        // when we run out of pixels, just ignore
      }
    }
    original.updatePixels();
  }

  // when done, update pixels to display
  updatePixels();

  // display the result!
  image(original, 0, 0);
  if (saveIt) {
    println("Saving file...");
    //filenames = stripFileExtension(filenames);
    save("results/SeamSort_" + filenames + ".tiff");
  }
  println("DONE!");
}

// strip file extension for saving and renaming
String stripFileExtension(String s) {
  s = s.substring(s.lastIndexOf('/')+1, s.length());
  s = s.substring(s.lastIndexOf('\\')+1, s.length());
  s = s.substring(0, s.lastIndexOf('.'));
  return s;
}
This second piece of code works by processing all the images in the selected folder:
String basedir = "D:/things/pixelsortframes"; // Specify the directory in which the frames are located. Use forward slashes.
String fileext = ".png"; // Change to the format your images are in.
int resumeprocess = 0; // If you wish to resume a previously stopped process, change this value.
int mode = 1; // MODE: 0 = black, 1 = bright, 2 = white
int blackValue = -10000000;
int brightnessValue = -1;
int whiteValue = -6000000;
PImage img;
String[] filenames;
int row = 0;
int column = 0;
int i = 0;
java.io.File folder = new java.io.File(dataPath(basedir));
java.io.FilenameFilter extfilter = new java.io.FilenameFilter() {
  boolean accept(File dir, String name) {
    return name.toLowerCase().endsWith(fileext);
  }
};

void setup() {
  if (resumeprocess > 0) {i = resumeprocess - 1; frameCount = i;}
  size(1920, 1080); // Resolution of the frames. It's likely there's a better way of doing this..
  filenames = folder.list(extfilter);
}
void draw() {
  if (i + 1 > filenames.length) {println("Uh.. Done!"); System.exit(0);}
  row = 0;
  column = 0;
  img = loadImage(basedir+"/"+filenames[i]);
  image(img, 0, 0);

  while (column < width-1) {
    img.loadPixels();
    sortColumn();
    column++;
    img.updatePixels();
  }

  while (row < height-1) {
    img.loadPixels();
    sortRow();
    row++;
    img.updatePixels();
  }

  image(img, 0, 0);
  saveFrame(basedir+"/out/"+filenames[i]);
  println("Frames processed: "+frameCount+"/"+filenames.length);
  i++;
}
Essentially I want to do the same thing, only with a different image process, but my code is not doing this to all the files in the folder... just one file.
You seem to be confused about what the setup() function does. It runs once, and only once, at the beginning of your code's execution. You don't have any looping structure for processing the other files, so it's no wonder it only processes the first one. Perhaps wrap the entire thing in a for loop, or move the per-frame work into draw() as the second sketch does? It looks like you partly thought about this, judging by the global variable i, but you never increment it to move on to the next image, and you shadow it with the local loop variable i in several for loops later anyway.

In Selenium, how to compare images?

I have been asked to automate our company's website using the Selenium automation tool, but I am new to Selenium; so far I have only learnt the basics of Selenium IDE and RC. I am confused about how to compare actual and expected images, as we usually do in other automation tools. How do we come to the conclusion that there is a bug in the website? Obviously through image comparison, but I wonder why Selenium, being one of the most popular tools, doesn't have an image comparison option. On the other hand, I doubt whether my way of approaching the automation process is correct. Could somebody please help me out?
Thanks in advance!!
Sanjay S
I had a similar task: I needed to compare more than 3000 images on a web page.
First of all I scrolled the page to load all the images:
public void compareImage() throws InterruptedException {
    driver.get(baseUrl);
    driver.manage().window().maximize();
    JavascriptExecutor executor = (JavascriptExecutor) driver;
    Long previousHeight;
    Long currentHeight;
    do {
        previousHeight = (Long) executor.executeScript("return document.documentElement.scrollHeight");
        executor.executeScript("window.scrollBy(0, document.documentElement.scrollHeight)");
        Thread.sleep(500);
        currentHeight = (Long) executor.executeScript("return document.documentElement.scrollHeight");
    } while (Long.compare(previousHeight, currentHeight) != 0);
After that, I compared the size of every image with the first image (or you could just hard-code the expected size):
    List<WebElement> images = driver.findElements(By.cssSelector("img[class='playable']"));
    List<String> errors = new LinkedList<>();
    int imgWidth, imgHeight, elWidth, elHeight;
    int imgNum = 0;
    imgWidth = images.get(0).getSize().getWidth();
    imgHeight = images.get(0).getSize().getHeight();

    for (WebElement el : images) {
        imgNum++;
        elWidth = el.getSize().getWidth();
        elHeight = el.getSize().getHeight();
        if (imgWidth != elWidth || imgHeight != elHeight) {
            errors.add(String.format("Picture # %d has incorrect size (%d : %d) px",
                    imgNum, elWidth, elHeight));
        }
    }

    for (String str : errors)
        System.out.println(str);

    if (errors.size() == 0)
        System.out.println("All images have the same size");
}
Since you mention knowledge about Selenium RC, you can easily extend Selenium's capability using a library for your chosen programming language. For instance, in Java you can use the PixelGrabber class for comparing two images and assert their match.
ImageMagick and imagediff are also two good tools for image matching. You would need Selenium RC and knowledge of a programming language to work with them.
Image comparison in C#. To get exact results I recommend disabling the browser's anti-aliasing feature before taking screenshots; otherwise the pixels are drawn a little bit differently each time. For example, for the HTML canvas element: options.AddArgument("disable-canvas-aa");
private static bool ImageCompare(Bitmap bmp1, Bitmap bmp2, Double TolerasnceInPercent)
{
    bool equals = true;
    bool flag = true; // inner loop isn't broken

    // Test to see if we have the same size of image
    if (bmp1.Size == bmp2.Size)
    {
        for (int x = 0; x < bmp1.Width; ++x)
        {
            for (int y = 0; y < bmp1.Height; ++y)
            {
                Color Bitmap1 = bmp1.GetPixel(x, y);
                Color Bitmap2 = bmp2.GetPixel(x, y);

                if (Bitmap1.A != Bitmap2.A)
                {
                    if (!CalculateTolerance(Bitmap1.A, Bitmap2.A, TolerasnceInPercent))
                    {
                        flag = false;
                        equals = false;
                        break;
                    }
                }
                if (Bitmap1.R != Bitmap2.R)
                {
                    if (!CalculateTolerance(Bitmap1.R, Bitmap2.R, TolerasnceInPercent))
                    {
                        flag = false;
                        equals = false;
                        break;
                    }
                }
                if (Bitmap1.G != Bitmap2.G)
                {
                    if (!CalculateTolerance(Bitmap1.G, Bitmap2.G, TolerasnceInPercent))
                    {
                        flag = false;
                        equals = false;
                        break;
                    }
                }
                if (Bitmap1.B != Bitmap2.B)
                {
                    if (!CalculateTolerance(Bitmap1.B, Bitmap2.B, TolerasnceInPercent))
                    {
                        flag = false;
                        equals = false;
                        break;
                    }
                }
            }
            if (!flag)
            {
                break;
            }
        }
    }
    else
    {
        equals = false;
    }
    return equals;
}
This C# function calculates the tolerance:
private static bool CalculateTolerance(Byte FirstImagePixel, Byte SecondImagePixel, Double TolerasnceInPercent)
{
    double OneHundredPercent;
    double DifferencesInPix;
    double DifferencesPercentage;

    if (FirstImagePixel > SecondImagePixel)
    {
        OneHundredPercent = FirstImagePixel;
    }
    else
    {
        OneHundredPercent = SecondImagePixel;
    }

    if (FirstImagePixel > SecondImagePixel)
    {
        DifferencesInPix = FirstImagePixel - SecondImagePixel;
    }
    else
    {
        DifferencesInPix = SecondImagePixel - FirstImagePixel;
    }

    DifferencesPercentage = (DifferencesInPix * 100) / OneHundredPercent;
    DifferencesPercentage = Math.Round(DifferencesPercentage, 2);

    if (DifferencesPercentage > TolerasnceInPercent)
    {
        return false;
    }
    return true;
}

Show image in picture gallery in Windows Phone 7

I have an image app in WP7.
class Images
{
    public string Title { get; set; }
    public string Path { get; set; }
}
At page level, I bind the title and path (relative to my app) to a list.
What I need is: when the user clicks a list item, the respective image should open in the picture gallery of Windows Phone 7.
You should clarify your question, but I suppose Path is the location of your image in isolated storage. Provided that img is the name of your Image control in XAML:
img.Source = GetImage(LoadIfExists(image.Path));
LoadIfExists returns the binary data for a file in isolated storage, and GetImage turns it into a WriteableBitmap:
public static WriteableBitmap GetImage(byte[] buffer)
{
    int width = buffer[0] * 256 + buffer[1];
    int height = buffer[2] * 256 + buffer[3];
    long matrixSize = width * height;
    WriteableBitmap retVal = new WriteableBitmap(width, height);
    int bufferPos = 4;

    for (int matrixPos = 0; matrixPos < matrixSize; matrixPos++)
    {
        int pixel = buffer[bufferPos++];
        pixel = pixel << 8 | buffer[bufferPos++];
        pixel = pixel << 8 | buffer[bufferPos++];
        pixel = pixel << 8 | buffer[bufferPos++];
        retVal.Pixels[matrixPos] = pixel;
    }
    return retVal;
}

public static byte[] LoadIfExists(string fileName)
{
    byte[] retVal;
    using (IsolatedStorageFile iso = IsolatedStorageFile.GetUserStoreForApplication())
    {
        if (iso.FileExists(fileName))
        {
            using (IsolatedStorageFileStream stream = iso.OpenFile(fileName, FileMode.Open))
            {
                retVal = new byte[stream.Length];
                stream.Read(retVal, 0, retVal.Length);
            }
        }
        else
        {
            retVal = new byte[0];
        }
    }
    return retVal;
}
If you want to write the image into the picture library, it's basically the same process, ending with a call to SavePictureToCameraRoll() on MediaLibrary, as explained in this MSDN article.
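A minimal sketch of that last step, assuming the JPEG bytes are already in isolated storage under a hypothetical name ("myimage.jpg") and that the app has the ID_CAP_MEDIALIB capability (MediaLibrary lives in Microsoft.Xna.Framework.Media):
using (IsolatedStorageFile iso = IsolatedStorageFile.GetUserStoreForApplication())
using (IsolatedStorageFileStream stream = iso.OpenFile("myimage.jpg", FileMode.Open, FileAccess.Read))
{
    // Copies the stream into the phone's camera roll and returns the saved Picture.
    MediaLibrary library = new MediaLibrary();
    Picture saved = library.SavePictureToCameraRoll("myimage.jpg", stream);
}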
