Is it possible to save a generated image in Codename One?

My question is related to this previous question. What I want to achieve is to stack images (they have transparency), write a string on top, and save the photomontage / photocollage at full resolution.
@Override
protected void beforeMain(Form f) {
    Image photoBase = fetchResourceFile().getImage("Voiture_4_3.jpg");
    Image watermark = fetchResourceFile().getImage("Watermark.png");

    f.setLayout(new LayeredLayout());
    final Label drawing = new Label();
    f.addComponent(drawing);

    // Mutable image we will draw into (white background)
    Image mutableImage = Image.createImage(photoBase.getWidth(), photoBase.getHeight());
    drawing.getUnselectedStyle().setBgImage(mutableImage);
    drawing.getUnselectedStyle().setBackgroundType(Style.BACKGROUND_IMAGE_SCALED_FIT);

    // Paint all the stuff
    paints(mutableImage.getGraphics(), photoBase, watermark, photoBase.getWidth(), photoBase.getHeight());

    // Save the collage
    Image screenshot = Image.createImage(photoBase.getWidth(), photoBase.getHeight());
    f.revalidate();
    f.setVisible(true);
    drawing.paintComponent(screenshot.getGraphics(), true);

    String imageFile = FileSystemStorage.getInstance().getAppHomePath() + "screenshot.png";
    try (OutputStream os = FileSystemStorage.getInstance().openOutputStream(imageFile)) {
        ImageIO.getImageIO().save(screenshot, os, ImageIO.FORMAT_PNG, 1);
    } catch (IOException err) {
        err.printStackTrace();
    }
}

public void paints(Graphics g, Image background, Image watermark, int width, int height) {
    g.drawImage(background, 0, 0);
    g.drawImage(watermark, 0, 0);

    g.setColor(0xFF0000);
    // Upper left corner
    g.fillRect(0, 0, 10, 10);
    // Lower right corner
    g.setColor(0x00FF00);
    g.fillRect(width - 10, height - 10, 10, 10);

    g.setColor(0xFF0000);
    Font f = Font.createTrueTypeFont("Geometos", "Geometos.ttf").derive(220, Font.STYLE_BOLD);
    g.setFont(f);
    // Draw a string right below the M from Mercedes on the car windscreen (measured in Gimp)
    g.drawString("HelloWorld", 848, 610);
}
This is the saved screenshot I get if I use the iPhone 6 skin (the payload image is smaller than the original one and is centered). If I use the Xoom skin this is what I get (the payload image is still smaller than the original image, but it has moved to the left).
So to sum it all up: why is the saved screenshot with the Xoom skin different from the one I get with the iPhone skin? Is there any way to directly save the graphics on which I paint in the paints method, so that the saved image keeps the original dimensions?
Thanks a lot to anyone that could help me :-)!
Cheers,

You can save an image in Codename One using the ImageIO class. Notice that you can draw a container hierarchy into a mutable image using the paintComponent(Graphics) method.
You can take either approach: drawing onto a mutable image, or composing via layouts. Personally I always prefer layouts, as I like the abstraction, but I wouldn't say the mutable image approach is right or wrong.
Notice that if you change/repaint a lot then mutable images are slower (this will not be noticeable for regular code or on the simulator) as they are forced to use the software renderer and can't use the GPU fully.
In the previous question it seems you placed the image with a "FIT" style, which naturally drew it smaller than its container, and then drew the image on top of it manually... This is problematic.
One solution is to draw everything manually but then you will need to do the "fit" aspect of drawing yourself. If you use layouts you should position everything based on the layouts including your drawing/text.
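To make the first option concrete, here is a minimal sketch built only from the calls already used in the question (paints(), FileSystemStorage, ImageIO); the file name collage.png is just an illustrative choice. It paints directly into a mutable image of the photo's own size and saves that image, so the saved PNG keeps the original resolution regardless of the device skin:
// Build the collage at the photo's own resolution, off-screen
Image collage = Image.createImage(photoBase.getWidth(), photoBase.getHeight());
paints(collage.getGraphics(), photoBase, watermark, photoBase.getWidth(), photoBase.getHeight());

// Save it, exactly like the screenshot was saved above
String imageFile = FileSystemStorage.getInstance().getAppHomePath() + "collage.png";
try (OutputStream os = FileSystemStorage.getInstance().openOutputStream(imageFile)) {
    ImageIO.getImageIO().save(collage, os, ImageIO.FORMAT_PNG, 1);
} catch (IOException err) {
    err.printStackTrace();
}
The on-screen Label can still show the same mutable image through its style, as in the question; the point is only that what gets saved is the image you painted, not a screenshot of the scaled component.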

Related

Why are the X/Y coordinates of an image in a PDF always zero from GetImageCTM?

I have been searching the net for a long time, but to no avail. Please help or give me some ideas on how to achieve this. Any help will be appreciated.
I get the Matrix via ImageRenderInfo#GetImageCTM() and read the X/Y coordinates from it,
but they are always 0!
I have tried both APIs, GetStartPoint() and GetImageCTM().
However, the X/Y coordinates are always 0 :-(
Note: the PDF contains images at various positions (not at the (0,0) coordinate).
void IRenderListener.RenderImage(ImageRenderInfo imgRenderInfo)
{
    Matrix mtx = imgRenderInfo.GetImageCTM();
    // x, y
    float[] coordinate = new float[] { mtx[Matrix.I31], mtx[Matrix.I32] };
    // Why are coordinate[0] and coordinate[1] always zero,
    // regardless of the image's position in the PDF?
}
Answer to the question as is
I parsed the page contents of an arbitrary file I had at hand (the example PDF from this question) using a render listener with your implementation code plus output of the coordinates:
public void ExtractImageCoordinatesFromArchmodels()
{
    using (PdfReader reader = new PdfReader(@"EVERMOTION ARCHMODELS VOL.78.pdf"))
    {
        PdfReaderContentParser parser = new PdfReaderContentParser(reader);
        ImageCoordinatesRenderListener listener = new ImageCoordinatesRenderListener();
        for (var i = 1; i <= reader.NumberOfPages; i++)
        {
            parser.ProcessContent(i, listener);
        }
    }
}

internal class ImageCoordinatesRenderListener : IRenderListener
{
    public void BeginTextBlock()
    { }

    public void EndTextBlock()
    { }

    public void RenderText(TextRenderInfo renderInfo)
    { }

    public void RenderImage(ImageRenderInfo renderInfo)
    {
        Matrix mtx = renderInfo.GetImageCTM();
        // x, y
        float[] coordinate = new float[] { mtx[Matrix.I31], mtx[Matrix.I32] };
        Console.WriteLine("Image at {0}, {1}.", coordinate[0], coordinate[1]);
    }
}
and the output was
Image at 6,00029, 52,15466.
Image at 19,84251, 363,4501.
Image at 294,091, 361,5604.
Image at 300,0336, 81,089.
Image at 15,59055, 72,94052.
Image at 5,322647, 340,7029.
Image at 288,5311, 386,0621.
Image at 291,7613, 69,35573.
Image at 28,50845, 53,13286.
Image at 41,2021, 380,3172.
Image at 290,8796, 368,9564.
Image at 295,8532, 50,71478.
Image at 19,13385, 49,21146.
Image at 25,5118, 385,9343.
Image at 282,4584, 379,8427.
Image at 293,5927, 65,19702.
Image at 4,535416, 60,35075.
Image at 3,364258, 374,4344.
Image at 288,0557, 373,5591.
Image at 299,9102, 59,13971.
Image at 11,33858, 66,10181.
Image at 11,66959, 380,3134.
Image at 297,1836, 378,4615.
Image at 299,9689, 66,74164.
Image at 10,62991, 53,18137.
Image at 5,180252, 377,7065.
Image at 279,9567, 377,9544.
Image at 289,9219, 69,23323.
Image at 6,400314, 68,17795.
Image at 11,33858, 361,2458.
Image at 297,1935, 373,4553.
Image at 299,8854, 68,30142.
Image at 7,086609, 68,13367.
Image at 3,82518, 352,3451.
Image at 287,9208, 373,4846.
Image at 294,6425, 68,3132.
Image at 41,2271, 68,15968.
Image at 5,709488, 356,2161.
Image at 304,9857, 373,593.
Image at 282,4557, 48,97745.
Image at 5,669281, 53,65367.
Image at 27,34265, 382,0123.
Image at 297,1409, 373,494.
Image at 300,0584, 50,23624.
Image at 7,245102, 68,23528.
Image at 8,503922, 380,1963.
Image at 290,1901, 355,9355.
Image at 287,2598, 60,53516.
Image at 5,102356, 68,01541.
Image at 17,00786, 378,9057.
Image at 296,8928, 373,5667.
Image at 299,9655, 68,04535.
(My locale uses a comma as decimal separator.)
So I cannot reproduce your claim that
the X/Y coordinator is always 0
Thus, what you observe is either due to some issue in the rest of your code or to something special about your test PDFs; perhaps they really do all have their images positioned at (0,0).
Clarifications from comments
Meanwhile the OP has clarified in comments that the images of interest are located in annotation appearance streams, not in the page content stream.
Coordinates therein are in the respective annotation appearance stream's coordinate system, which is implied by the appearance's bounding box (its BBox entry). This bounding box is then optionally transformed by the appearance matrix (its Matrix entry). The resulting four-sided area is then scaled and moved into the annotation's rectangle (its Rect entry). And depending on page rotation and annotation properties, this rectangle may be rotated by a multiple of 90° relative to the page coordinates.
Thus, a general solution transforming those coordinates into the default user space coordinate system of the page requires some math.
Often, though, bitmaps in annotation appearances are filling the bounding box (nearly) completely. Often there is no appearance matrix. And often annotations rotate with the page.
Thus, an often good approximation is to simply use the annotation rectangle. This also is what the OP now uses.
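As a rough sketch of that approximation (shown here with iText 5's Java API from com.itextpdf.text.pdf; the snippets above use the iTextSharp equivalents, and the file name is a placeholder), you can read each annotation's /Rect entry directly:
PdfReader reader = new PdfReader("annotated.pdf");  // placeholder file name
for (int page = 1; page <= reader.getNumberOfPages(); page++) {
    PdfDictionary pageDict = reader.getPageN(page);
    PdfArray annots = pageDict.getAsArray(PdfName.ANNOTS);
    if (annots == null)
        continue;  // no annotations on this page
    for (int i = 0; i < annots.size(); i++) {
        PdfDictionary annot = annots.getAsDict(i);
        PdfArray rect = annot.getAsArray(PdfName.RECT);
        // [llx, lly, urx, ury] in default user space; a bitmap that fills the
        // appearance's BBox ends up (approximately) in this rectangle
        System.out.println("Page " + page + ", annotation rect: " + rect);
    }
}
reader.close();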

How can I get rid of artifacts in ImageSource created with SkiaSharp

I created an app in which I want to display text on top of Google Maps. I chose to use custom markers, but they can only be images, so I decided to create an image from my text using SkiaSharp.
private static ImageSource CreateImageSource(string text)
{
    int numberSize = 20;
    int margin = 5;

    SKBitmap bitmap = new SKBitmap(30, numberSize + margin * 2, SKImageInfo.PlatformColorType, SKAlphaType.Premul);
    SKCanvas canvas = new SKCanvas(bitmap);

    SKPaint paint = new SKPaint
    {
        Style = SKPaintStyle.StrokeAndFill,
        TextSize = numberSize,
        Color = SKColors.Red,
        StrokeWidth = 1,
    };

    canvas.DrawText(text.ToString(), 0, numberSize, paint);

    SKImage skImage = SKImage.FromBitmap(bitmap);
    SKData data = skImage.Encode(SKEncodedImageFormat.Png, 100);
    return ImageSource.FromStream(data.AsStream);
}
The images I create, however, have ugly artifacts at the top of the resulting image, and my feeling is that they get worse when I create multiple images.
I built an example app that shows the artifacts and the code I used to draw the text. It can be found here:
https://github.com/hot33331/SkiaSharpExample
How can I get rid of those artifacts? Am I using Skia wrong?
I got the following answer from Matthew Leibowitz on the SkiaSharp GitHub:
The chances are you are not clearing the canvas/bitmap first.
You can either do bitmap.Erase(SKColors.Transparent) or canvas.Clear(SKColors.Transparent) (you can use any color).
The reason for this is performance. When creating a new bitmap, the computer has no way of knowing what background color you want. So, if it was to go transparent and you wanted white, then there would be two draw operations to clear the pixels (and this may be very expensive for large images).
During the allocation of the bitmap, the memory is provided, but the actual data is untouched. If there was anything there previously (which there will be), this data appears as colored pixels.
When I've seen that before, it's been because the memory passed to SkiaSharp was not zeroed. As an optimization, though, Skia assumes that the memory block passed to it is pre-zeroed. As a result, if your first operation is a clear, it will ignore that operation, because it thinks that the state is already clean. To resolve this issue, you can manually zero the memory passed to SkiaSharp.
public static SKSurface CreateSurface(int width, int height)
{
    // create a block of unmanaged native memory for use as the Skia bitmap buffer.
    // unfortunately, this may not be zeroed in some circumstances.
    IntPtr buff = System.Runtime.InteropServices.Marshal.AllocCoTaskMem(width * height * 4);
    byte[] empty = new byte[width * height * 4];

    // copy in zeroed memory.
    // maybe there's a more sanctioned way to do this.
    System.Runtime.InteropServices.Marshal.Copy(empty, 0, buff, width * height * 4);

    // create the actual SkiaSharp surface on top of the (now zeroed) buffer.
    // note: the caller is responsible for freeing buff once the surface is disposed.
    var info = new SKImageInfo(width, height, SKColorType.Rgba8888, SKAlphaType.Premul);
    var surface = SKSurface.Create(info, buff, width * 4);
    return surface;
}
Edit: by the way, I assume this is a bug in SkiaSharp. The samples/APIs that create the buffer for you should probably be zeroing it out. Depending on the platform it can be hard to reproduce, as the memory allocator behaves differently and is more or less likely to hand you untouched memory.

In Processing, how can I save part of the window as an image?

I am using Processing under Fedora 20, and I want to display an image of the extending tracks of objects moving across part of the screen, with each object displayed at its current position at the end of its track. To avoid having to record all the co-ordinates of the tracks, I use save("image.png"); to save the tracks so far, then draw the objects. In the next frame I use img = loadImage("image.png"); to restore the tracks made so far, without the objects, which would still be in their previous positions. I extend the tracks to their new positions, then use save("image.png"); to save the extended tracks, still without the objects, ready for the next loop round. Then I draw the objects in their new positions at the end of their extended tracks. In this way successive loops show the objects advancing, with their previous positions as tracks behind them.
This has worked well in tests where the image is the whole frame, but now I need to put that display in a corner of the whole frame and leave the rest unchanged. I expect that createImage(...) will be the answer, but I cannot find any details of how to do so.
A similar question asked here has this recommendation: "The PImage class contains a save() function that exports to file. The API should be your first stop for questions like this." Of course I've looked at that API, but I don't think it helps here, unless I have to create the image to save pixel by pixel, in which case I would expect it to slow things down a lot.
So my question is: in Processing, can I save and restore just part of the frame as an image, without affecting the rest of the frame?
I have continued to research this. It seems strange to me that I can find oodles of sketch references, tutorials, and examples that save and load the entire frame, but no easy way of saving and restoring just part of the frame as an image. I could probably do it using PImage, but that appears to require an awful lot of "image." prefixes in front of everything to be drawn there.
I have got round it with a kludge: I created a mask image (see this Processing reference) the size of the whole frame. The mask is defined as grey values representing opacity, so that black (0) is fully transparent and white (255) is fully opaque, completely concealing the background image, thus:
void setup() {
  size(1280, 800);
  background(0);           // whole frame is transparent..
  fill(255);               // ..and..
  rect(680, 0, 600, 600);  // ..smaller image area is now opaque
  save("[path to sketch]/mask01.jpg");
}

void draw() {}
Then in my main code I use:
PImage img, mimg;
img = loadImage("image4.png");   // The image I want to see ..
// .. including the rest of the frame, which would obscure previous work
mimg = loadImage("mask01.jpg");  // load the mask
// apply the mask, allowing previous work to show through
img.mask(mimg);
// display the masked image
image(img, 0, 0);
I will accept this as an answer if no better suggestion is made.
void setup() {
  size(640, 480);
  background(0);
  noStroke();
  fill(255);
  rect(40, 150, 200, 100);
}

void draw() {
}

void mousePressed() {
  // get() copies just the given rectangle of the window into a PImage,
  // which can then be saved on its own
  PImage img = get(40, 150, 200, 100);
  img.save("test.jpg");
}
Old news, but here's an answer: you can use the pixel array and math.
Let's say that your viewport is 800×900 pixels and you only want to export the region to the right of the first 200 columns (as in the example below).
You can use loadPixels(); to fill the pixels[] array with the current content of the viewport, then fish the pixels you want from this array.
In the given example, here's a way to filter the unwanted pixels:
void exportImage() {
  // creating the image at the desired size
  PImage img = createImage(600, 900, RGB);

  loadPixels();
  int index = 0;
  for (int i = 0; i < pixels.length; i++) {
    // filtering out the unwanted first 200 pixels on every row.
    // remember that the pixels[] array is one-dimensional, so some math is
    // unavoidable. For this simple example I use the modulo operator.
    if (i % width >= 200) {  // "magic numbers" are bad, remember. This is only a simplification.
      img.pixels[index] = pixels[i];
      index++;
    }
  }

  img.updatePixels();
  img.save("test.png");
}
It may be too late to help you, but maybe someone else will need this. Either way, have fun!

QGraphicsView fitInView() gives a very pixelated result when scaling down

I have searched everywhere and I cannot find any solution after two days of trying.
The Problem:
I'm building an image viewer with a "Fit Image to View" feature. I load a picture of, say, 3000+ pixels into my QGraphicsView (which is a lot smaller, of course), and scrollbars appear; that's good. When I click my btnFitView button, this is executed:
ui->graphicsView->fitInView(scene->sceneRect(),Qt::KeepAspectRatio);
This is downscaling, right? After fitInView() all lines are pixelated. It looks like a saw went over the lines in the image.
For example: an image of a car has jagged lines, and in an image of a textbook the letters end up in very bad quality.
My code sample:
// select file, load image in view
QString strFilePath = QFileDialog::getOpenFileName(
this,
tr("Open File"),
"/home",
tr("Images (*.png *.jpg)"));
imageObject = new QImage();
imageObject->load(strFilePath);
image = QPixmap::fromImage(*imageObject);
scene = new QGraphicsScene(this);
scene->addPixmap(image);
scene->setSceneRect(image.rect());
ui->graphicsView->setScene(scene);
// on_btnFitView_Clicked() :
ui->graphicsView->fitInView(scene->sceneRect(),Qt::KeepAspectRatio);
Just before fitInView(), sizes are:
qDebug()<<"sceneRect = "<< scene->sceneRect();
qDebug()<<"viewRect = " << ui->graphicsView->rect();
sceneRect = QRectF(0,0 1000x750)
viewRect = QRect(0,0 733x415)
If necessary, I can upload screenshots of the original loaded image and of the fitted-in-view result.
Am I doing this right? It seems all examples on the web use fitInView for auto-fitting. Should I perhaps use some other operations on the pixmap?
SOLUTION
// LOAD IMAGE
bool ImgViewer::loadImage(const QString &strImagePath)
{
    m_image = new QImage(strImagePath);
    if (m_image->isNull()) {
        return false;
    }
    clearView();
    m_pixmap = QPixmap::fromImage(*m_image);
    m_pixmapItem = m_scene->addPixmap(m_pixmap);
    m_scene->setSceneRect(m_pixmap.rect());
    this->centerOn(m_pixmapItem);

    // preserve fitView if active
    if (m_IsFitInView)
        fitView();

    return true;
}

// TOGGLED FUNCTIONS
void ImgViewer::fitView()
{
    if (m_image->isNull())
        return;

    this->resetTransform();
    QPixmap px = m_pixmap; // use a local copy (not the original), otherwise the image
                           // is blurred after scaling the same pixmap multiple times
    px = px.scaled(QSize(this->width(), this->height()), Qt::KeepAspectRatio, Qt::SmoothTransformation);
    m_pixmapItem->setPixmap(px);
    m_scene->setSceneRect(px.rect());
}

void ImgViewer::originalSize()
{
    if (m_image->isNull())
        return;

    this->resetTransform();
    m_pixmap = m_pixmap.scaled(QSize(m_image->width(), m_image->height()), Qt::KeepAspectRatio, Qt::SmoothTransformation);
    m_pixmapItem->setPixmap(m_pixmap);
    m_scene->setSceneRect(m_pixmap.rect());
    this->centerOn(m_pixmapItem); // ensure the item is centered in the view
}
On downshrink this produces good quality. Here are some stats after calling these 2 functions:
// "originalSize()" : IMAGE SIZE = (1152, 2048)
// "originalSize()" : PIXMAP SIZE = (1152, 2048)
// "originalSize()" : VIEW SIZE = (698, 499)
// "originalSize()" : SCENE SIZE = (1152, 2048)
// "fitView()" : IMAGE SIZE = (1152, 2048)
// "fitView()" : PIXMAP SIZE = (1152, 2048)
// "fitView()" : VIEW SIZE = (698, 499)
// "fitView()" : SCENE SIZE = (280, 499)
There is a problem now: after the call to fitView(), look at the size of the scene. It is much smaller.
And if fitView() is active and I then scale the image on wheelEvent (zoom in / zoom out) with the view's scale(factor, factor) function, it produces a terrible result.
This doesn't happen with originalSize(), where the scene size is equal to the image size.
Think of the view as a window into the scene.
Moving the view large amounts, either zooming in or out, will likely create images that don't look great. Rather than the image being scaled as you would expect, the view is just moving away from the scene and doing its best to render the image, but the image has not been scaled, just transformed in the scene.
Rather than using QGraphicsView::fitInView, keep the main image in memory and create a scaled version of it with QPixmap::scaled each time FitInView is selected or the user zooms in / out. Then set this QPixmap on the QGraphicsPixmapItem with setPixmap.
You may also want to think about dropping the scroll bars and allowing the user to drag the image around the screen, which provides a better user interface in my opinion; though of course it depends on your requirements.

Resizing and saving an image in WinMobile and .NET CF throws OutOfMemoryException

I have a WinMobile app which allows the user to snap a photo with the camera and then use it for various things. The photo can be snapped at 1600x1200, 800x600 or 640x480, but it must always be resized so that its longest side is 400px (the other side is proportional, of course). Here's the code:
private void LoadImage(string path)
{
    Image tmpPhoto = new Bitmap(path);

    // calculate new bitmap size...
    double width = ...
    double height = ...

    // draw new bitmap
    Image photo = new Bitmap(width, height, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
    using (Graphics g = Graphics.FromImage(photo))
    {
        g.FillRectangle(new SolidBrush(Color.White), new Rectangle(0, 0, photo.Width, photo.Height));
        int srcX = (int)((double)(tmpPhoto.Width - width) / 2d);
        int srcY = (int)((double)(tmpPhoto.Height - height) / 2d);
        g.DrawImage(tmpPhoto, new Rectangle(0, 0, photo.Width, photo.Height), new Rectangle(srcX, srcY, photo.Width, photo.Height), GraphicsUnit.Pixel);
    }
    tmpPhoto.Dispose();

    // save new image and dispose
    photo.Save(Path.Combine(config.TempPath, config.TempPhotoFileName), System.Drawing.Imaging.ImageFormat.Jpeg);
    photo.Dispose();
}
Now the problem is that the app breaks in the photo.Save call with an OutOfMemoryException. And I don't know why, since I dispose of tmpPhoto (the original photo from the camera) as soon as I can, and I also dispose of the Graphics object. Why does this happen? It seems impossible to me that one can't take a photo with the camera and resize/save it without making it crash :( Should I resort to C++ for such a simple thing?
Thanks.
Have you looked at memory usage at each step to see exactly where you're using the most? You omitted your calculations for width and height, but assuming they are right, you would end up with photo requiring 400x300x3 bytes (24 bits per pixel) == 360k for the bitmap data itself, which is not inordinately large.
My guess is that even though you're calling Dispose, the resources aren't getting released, especially if you're calling this method multiple times. The CF behaves in an unexpected way with Bitmaps. I call it a bug. The CF team doesn't.
