Create new .png Image in D

I am trying to create a .png image that is X pixels tall and Y pixels wide. I am not finding what I am looking for on dlang.org, and am struggling to find any other resources via Google.
Can you please provide an example of how to create a .png image in D?
For example, BufferedImage off_Image = new BufferedImage(100, 50, BufferedImage.TYPE_INT_ARGB); from http://docs.oracle.com/javase/tutorial/2d/images/drawonimage.html is what I am looking for (I think), except in the D programming language.

I wrote a little lib that can do this too. Grab png.d and color.d from here:
https://github.com/adamdruppe/misc-stuff-including-D-programming-language-web-stuff
import arsd.png;

void main() {
    // width, height
    TrueColorImage image = new TrueColorImage(100, 50);

    // fill it in with a gradient
    auto colorData = image.imageData.colors; // get a reference to the color array
    foreach(y; 0 .. image.height)
        foreach(x; 0 .. image.width)
            colorData[y * image.width + x] = Color(x * 2, 0, 0); // fill in (r, g, b, a = 255)

    writePng("test.png", image); // save it to a file
}

There is nothing in the standard library for image work, but you should be able to use DevIL or FreeImage to do what you want. Both of them have Derelict bindings.
DevIL (derelict-il)
FreeImage (derelict-fi)
Just use the C API documentation for either of them.
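For reference, here is a minimal sketch of the FreeImage calls involved, written against the C API as plain C/C++ (from D you would make the same calls through the derelict-fi binding once it is loaded). The size, gradient, and filename are just placeholders.

#include <FreeImage.h>

int main() {
    FreeImage_Initialise();

    // Allocate a 100x50, 24-bit RGB bitmap.
    FIBITMAP* bitmap = FreeImage_Allocate(100, 50, 24);

    // Fill it with a simple horizontal gradient.
    for (unsigned y = 0; y < FreeImage_GetHeight(bitmap); ++y) {
        for (unsigned x = 0; x < FreeImage_GetWidth(bitmap); ++x) {
            RGBQUAD color;
            color.rgbRed   = (BYTE)(x * 2);
            color.rgbGreen = 0;
            color.rgbBlue  = 0;
            FreeImage_SetPixelColor(bitmap, x, y, &color);
        }
    }

    // Save it as a PNG and clean up.
    FreeImage_Save(FIF_PNG, bitmap, "test.png", 0);
    FreeImage_Unload(bitmap);
    FreeImage_DeInitialise();
    return 0;
}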

There is no standard 2D or 3D graphics API in Phobos, nor is there anything similar to the ImageIO API from Java. However, there are plenty of D libraries written by various individuals, as well as various bindings to C/C++ libraries, that could aid you in what you are doing. I am sure you should be able to accomplish what you need by using some parts of GtkD.

I'd like to offer an alternative to Adam's solution - dlib has quite a few modules that come in handy when writing multimedia applications: image manipulation, linear algebra and geometry processing, I/O streams done right, basic XML parsing, and others. It's still seeing some development on the core interfaces (as of February 2014), but that should get pretty stable within a few weeks.
With dlib, that example code would translate to:
import dlib.image;

// width, height
auto image = new Image!(PixelFormat.RGB8)(100, 50);

// fill it in with a gradient
foreach(y; 0 .. image.height)
    foreach(x; 0 .. image.width)
        image[x, y] = Color4f(x * 2 / 255.0f, 0, 0);

savePNG(image, "test.png");
Grabbing the bytes directly is of course possible too, but why not do it the easier way? Premature optimization, etc.
If you're building your application with dub (which you probably should), using the latest and best of dlib is as simple as adding "dlib": "~master" to your dependencies.
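Concretely, the dependencies section of your dub package description (dub.json, or package.json with older dub versions) only needs this entry; the package name here is just a placeholder:

{
    "name": "myapp",
    "dependencies": {
        "dlib": "~master"
    }
}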

Related

Capture screenshot: native API vs opengl

In my program I am using Qt's function: qApp->primaryScreen()->grabWindow(qApp->desktop()->winId(), x_offset, y_offset, w, h);
But it is a bit too slow for the main task, which is how I ran into the question above. The program has to work under both Windows and Mac OS X. I have heard that OpenGL makes a nice screen grabber since it is closer to the GPU than the native APIs, plus it is a cross-platform solution. This is the first thing I want to know: is OpenGL realistic as a desktop screen grabber? I mean something like the "print screen" button.
If it is, how? If it is not:
Windows: can you please advise how to do it? BitBlt, GetDC, something like that?
Mac OS X: AVFoundation? Can you please describe this, or give a link explaining how to capture a screenshot using this class? (It's the hard way for me, since I know almost nothing about Objective-C(++).)
UPDATE: I have read a lot about ways to capture a screenshot. What I have learned so far:
1. OpenGL is (maybe) usable as a screen grabber, but using it would not be a clean fit for this software. That said, I don't mind; if there is a solution I will accept it.
2. DirectX is not a way to solve my problem, since this software also has to work under Mac OS X.
Just to expand on Zhenyi Luo's answer, here is a code snippet I have used in the past.
It also uses FreeImage for exporting the screenshot.
void Display::SaveScreenShot(std::string FilePath, SCREENSHOT_FORMAT Format) {
    // Create pixel array
    GLubyte* pixels = new GLubyte[3 * Window::width * Window::height];

    // Read pixels from the screen buffer into the array
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glReadPixels(0, 0, Window::width, Window::height, GL_BGR, GL_UNSIGNED_BYTE, pixels);

    // Convert to FreeImage and save
    FIBITMAP* image = FreeImage_ConvertFromRawBits(pixels, Window::width,
                                                   Window::height, 3 * Window::width, 24,
                                                   0x0000FF, 0xFF0000, 0x00FF00, false);
    FreeImage_Save((FREE_IMAGE_FORMAT) Format, image, FilePath.c_str(), 0);

    // Free resources
    FreeImage_Unload(image);
    delete[] pixels;
}
glReadPixels(GLint x, GLint y, GLsizei width, GLsizei height, GLenum format, GLenum type, GLvoid* data) reads a block of pixels from the framebuffer into client memory starting at location data.

Rotating text using center in itext

I'm converting a document which is created with an online editor. I need to be able to rotate the text using its central point and not (0,0).
Since Image rotates around its centre I assumed this would work, but it doesn't. Does anyone have a clue how to rotate text around its central point in iText?
float fw = bf.getWidthPoint(text, textSize);
float fh = bf.getAscentPoint(text, textSize) - bf.getDescentPoint(text, textSize);
PdfTemplate template = content.createTemplate(fw, fh);
Rectangle r = new Rectangle(0,0,fw, fw);
r.setBackgroundColor(BaseColor.YELLOW);
template.rectangle(r);
template.setFontAndSize(bf, 12);
template.beginText();
template.moveText(0, 0);
template.showText(text);
template.endText();
Image tmpImage = Image.getInstance(template);
tmpImage.setAbsolutePosition(Utilities.millimetersToPoints(x), Utilities.millimetersToPoints(pageHeight-(y+Utilities.pointsToMillimeters(fh))));
tmpImage.setRotationDegrees(0);
document.add(tmpImage);
Why are you using beginText(), moveText(), showText(), endText()? Those methods are to be used by developers who speak PDF syntax fluently. Other developers should avoid using those low-level methods, because they are bound to make errors.
For instance: you're using the setFontAndSize() method outside a text object. That method is forbidden in the graphics state; you can only use it in the text state. Although Adobe Reader will probably tolerate it, Acrobat will complain when doing a syntax check.
Developers who don't speak PDF syntax fluently are advised to use convenience methods as demonstrated in the TextMethods example. Take a look at text_methods.pdf. The text "Away again Center" is probably what you need.
However, there's an even easier way to achieve what you want. Just use the static showTextAligned() method that is provided in the ColumnText class. That way you don't even need to use beginText(), setFontAndSize(), endText():
ColumnText.showTextAligned(template,
    Element.ALIGN_CENTER, new Phrase(text), x, y, rotation);
Using the showTextAligned() method in ColumnText also has the advantage that you can use a phrase that contains chunks with different fonts.
For those who use Paragraph in iText 7:
paragraph.setFixedPosition(....)
paragraph.setRotationAngle(Math.toRadians(....));
paragraph.setProperty(Property.ROTATION_POINT_X, textCenterX);
paragraph.setProperty(Property.ROTATION_POINT_Y, textCenterY);

How to generate texture mapping images?

I want to put/wrap images onto 3D objects. To keep things simple and fast, instead of using (and learning) a 3D library, I want to use mapping images. Mapping images are used in such a way:
So you generate the mapping images once for each object and use the same mapping for all images you want to wrap.
My question is how can I generate such mapping images (given the 3D model)? Since I don't know the terminology, my searches failed me. Sorry if I am using the wrong jargon.
Below you can see a description of the workflow.
I have the 3D model of the object and the input image; I want to generate the mapping images that I can use to generate the textured image.
I don't even know where to start; any pointers are appreciated.
More info
My initial idea was to somehow warp identity mappings (see below) using an external program. I generated horizontal and vertical gradient images in Photoshop just to see whether mapping works with Photoshop-generated images. The result doesn't look good. I wasn't hopeful, but it was worth a shot.
input
mappings (x and y): they just resize the image, they don't do anything fancy
result
As you can see, there are lots of artifacts. Custom mapping images I generated by warping the gradients look even worse.
Here is some more information on mappings: http://www.imagemagick.org/Usage/mapping/#distortion_maps
I am using OpenCV's remap() function for the mapping, roughly as sketched below.
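A minimal sketch of how remap() consumes such mapping images; the maps built here just rescale the image, like the gradient maps described above (output size and filenames are illustrative):

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat src = cv::imread("input.png");

    // The mapping images: for each destination pixel they store the source
    // x and y coordinate to sample. These particular maps only rescale the image.
    cv::Size dstSize(300, 200);
    cv::Mat map_x(dstSize, CV_32FC1), map_y(dstSize, CV_32FC1);
    for (int y = 0; y < dstSize.height; ++y) {
        for (int x = 0; x < dstSize.width; ++x) {
            map_x.at<float>(y, x) = x * (float)src.cols / dstSize.width;
            map_y.at<float>(y, x) = y * (float)src.rows / dstSize.height;
        }
    }

    // dst(x, y) = src(map_x(x, y), map_y(x, y))
    cv::Mat dst;
    cv::remap(src, dst, map_x, map_y, cv::INTER_LINEAR);
    cv::imwrite("result.png", dst);
    return 0;
}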
If I understand you right here, you want to do all of it in 2D?
Calling warpPerspective() for each of your cube surfaces will be much more successful than using remap().
pseudocode outline:
// for each surface:
// get the desired src and dst polygon
// the src one is your texture image, so that's:
vector<Point2f> p_src(4), p_dst(4);
p_src[0] = Point2f(0, 0);
p_src[1] = Point2f(0, src.rows - 1);
p_src[2] = Point2f(src.cols - 1, 0);
p_src[3] = Point2f(src.cols - 1, src.rows - 1);
// the dst poly is the one you want textured, a 3d->2d projection of the cube surface.
// sorry, you've got to do that on your own ;(
// let's say you've come up with this for the cube top:
p_dst[0] = Point2f(15, 15);
p_dst[1] = Point2f(44, 19);
p_dst[2] = Point2f(56, 30);
p_dst[3] = Point2f(33, 44);
// now you need the projection matrix to transform from one to the other
// (note: getPerspectiveTransform expects float points, hence Point2f):
Mat proj = getPerspectiveTransform(p_src, p_dst);
// finally, you can warp your texture to the dst polygon:
warpPerspective(src, dst, proj, dst.size());
If you can get hold of the 'Learning OpenCV' book, this is described around p. 170.
A final word of warning, since you're complaining about artifacts: yes, it will all look pretty cheesy. 'Real' 3D engines do a lot of work here (sub-pixel UV mapping, filtering, mipmapping, etc.), so if you want it to look nice, consider using the real thing.
By the way, there's nice OpenGL support built into OpenCV.
To achieve what you are trying to do, you need to render the 3D model's UVs to a texture. It will be easier to learn to render 3D than to do things this way, especially since there are a lot of weaknesses in your approach: lighting is difficult to do, and depth-buffer problems will be abundant.
Assuming all your objects will only ever be viewed from one angle, you need to render each of them to 3 textures:
UV map
Normal map
Depth map (to correct the depth buffer)
You will still have to do shading in order to make these look like your object, and I don't even know how to do the depth-buffer part; I just know it can be done.
So in order to avoid learning 3D, you will have to learn all the difficult parts of 3D rendering. That does not seem like the easier route...

How to see image data using opencv in visual studio?

I wrote an OpenCV project in VS2010 and the results were not the ones I expected, so I ran the debugger to see where the problem is. When I wanted to see the data inside the loaded image I didn't know how to do it. If I want to see the data inside my images, what should I do?
It is pretty simple in MATLAB to see the different channels of an image, i.e.
a = imread('test.jpg');
p1 = a(:,:,1)
p2 = a(:,:,2)
.
.
In OpenCV I wrote the same thing, but I don't know how to see all the elements at once like in MATLAB.
Mat a = imread("test.jpg");
vector<Mat> planes;
split(a, planes);
Mat T1 = planes[0];
// How can I see the data inside T1 when debugging the code?
I think this is what you are looking for - it's a great Visual Studio add-on
https://bitbucket.org/sergiu/opencv-visualizers
Just download the installer, make sure VS is closed, run it, re-open VS and voilà! Now, when you point at an OpenCV data structure, all kinds of nice info is shown.
Limitations: I saw some problems with multichannel images (it only shows the first channel) and it also has trouble displaying large matrices. If you want to see raw data in a big matrix, you can use the good old VS trick with debug variables: stop at a breakpoint, go to the Watch tab, and write there
((float*)myMat.data), 10
where float is the element type, myMat is your matrix, and 10 is the number of values you want to print. It will display the first 10 values starting at the memory location of myMat.data. If you do not choose the data type correctly, you'll see garbage. In my example, myMat is of type cv::Mat.
And never forget the power of visualization:
imshow("Image", myMat);
if your data fits into an image. You can use the contrib module's colormap support to enhance the visualization.
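A small sketch of that colormap idea, assuming myMat is a single-channel float matrix (in OpenCV 2.4 applyColorMap lives in the contrib module; in later versions it is part of imgproc):

#include <opencv2/opencv.hpp>

// Normalize a float matrix to 8 bits, then apply a false-color map before showing it.
void showWithColormap(const cv::Mat& myMat) {
    cv::Mat gray8, colored;
    cv::normalize(myMat, gray8, 0, 255, cv::NORM_MINMAX, CV_8UC1); // scale values to 0..255
    cv::applyColorMap(gray8, colored, cv::COLORMAP_JET);           // false-color the result
    cv::imshow("Colormapped", colored);
    cv::waitKey();
}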
I can't actually believe that nobody suggested Image Watch yet. It's the most amazing add-in ever. It shows you a view with all your Mat variables (images (gray and color), matrices) while debugging, there's useful stuff like zooming or contrast-stretching and you can even apply more complex functions directly in the plugin in real-time. It makes debugging of any kind of image operations a breeze and it's immensely helpful if you do calculations and linear algebra stuff with your cv::Mat matrices.
I recommend using the NativeViewer extension. It actually displays the content of an image in a preview window, not just the properly formatted info.
If you don't want to use a plug-in or extension to Visual Studio, you can access the elements one by one in the debugging watch tab by typing this:
T1.data[T1.step.buf[0]*i + T1.step.buf[1]*j];
where i is the row you want to look at and j is the column.
After downloading Image Watch, use this expression in the Watch window:
(imagesLoc._Myfirst)[0]
where the index selects the image in the vector.
You can use the Immediate window and an extension method like this one:
/// <summary>
/// Displays image
/// </summary>
public static void Display(this Mat m, Rect rect = default, string windowName = "")
{
    if (string.IsNullOrEmpty(windowName))
    {
        windowName = m.ToString();
    }

    var img = rect == default ? m : m.Crop(rect);
    double coef = Math.Min(1600d / img.Width, 800d / img.Height);

    Cv2.ImShow(windowName, img.Resize(new Size(coef * img.Width, (coef * img.Height) > 1 ? coef * img.Height : 1)));
    Cv2.WaitKey();
}
Then you stop at a breakpoint and call yourImage.Display() in the immediate window.
If you can use CLion, you can use the OpenCV Image Viewer plugin, which displays matrices while debugging with just a click.
https://plugins.jetbrains.com/plugin/14371-opencv-image-viewer
Disclaimer: I'm an author of this plugin

FreeImage portable float map (PFM) RGB channel order

I'm currently using FreeImage to load PFMs into a program that otherwise uses IplImages (the old data type for OpenCV). Here's a sample of what I'm doing (ignore the part about img being an array of Mats; that's related to some other code).
FIBITMAP *src;
// Load a PFM file using freeimage
src = FreeImage_Load(FIF_PFM, "test0.pfm", 0);
Mat* img;
img = new Mat[3];
// Create a copy of the image in an OpenCV matrix (using .clone() copies the data)
img[1] = Mat(FreeImage_GetHeight(src), FreeImage_GetWidth(src), CV_32FC3, FreeImage_GetScanLine(src, 0)).clone();
// Flip the image vertically because OpenCV row ordering is the reverse of FreeImage's
flip(img[1], img[1], 0);
// Save a copy
imwrite("OpenCV_converted_image.jpg", img[1]);
What's strange is that if I use FreeImage to load JPEGs instead by changing FIF_PFM to FIF_JPEG and CV_32FC3 to CV_8U, this works fine, i.e. the copied picture comes out unchanged. This makes me think that OpenCV and FreeImage generally agree on the ordering of RGB channels, and that the problem is related to PFMs specifically and their being a non-standardized format.
The PFMs I'm loading were written with this code (under "Local Histogram Equalization"), which appears to write them in RGB order although I could be wrong about that. It just takes the data from a MATLAB 3D matrix of doubles and dumps it into a file using fwrite. Also, if I modify that code to write PPMs instead, then view them in IrfanView, they look correct.
So that leaves me thinking FreeImage takes the file data to be BGR-ordered on disk already, which it is not and should not be.
Any thoughts? Is there an error in FreeImage's reading of PFMs, or is there something more subtle going on here? Thanks.
Well, I never really got this one sorted out. Long story short, FreeImage and OpenCV agree on color channel order (BGR) when loading most image formats, but not when loading PFMs. I can only assume that the makers of FreeImage have misinterpreted the admittedly not very solidified specs for PFMs. Since I was only using FreeImage to read/write PFMs, and it was proving quite complicated to get data back into a FreeImage structure after processing with OpenCV functions, I wrote my own PFM read/write code, which turned out to be very simple.
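For what it's worth, a reader along those lines can be as small as the sketch below (little-endian color PFMs only, no error handling; the scale factor's sign gives the endianness and its magnitude is ignored here). The RGB-to-BGR swap and the vertical flip are what make the result match OpenCV's conventions.

#include <cstdio>
#include <opencv2/opencv.hpp>

// Minimal PFM reader sketch: color ("PF") files, little-endian (negative scale) only.
cv::Mat readPFM(const char* filename) {
    FILE* f = std::fopen(filename, "rb");
    char type[3];
    int width, height;
    float scale;
    std::fscanf(f, "%2s %d %d %f", type, &width, &height, &scale);
    std::fgetc(f); // consume the single whitespace byte before the raster data

    cv::Mat img(height, width, CV_32FC3);
    std::fread(img.data, sizeof(float), 3 * (size_t)width * height, f);
    std::fclose(f);

    cv::cvtColor(img, img, cv::COLOR_RGB2BGR); // PFM stores RGB, OpenCV expects BGR
    cv::flip(img, img, 0);                     // PFM rows run bottom-to-top
    return img;
}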
