image_map in POV-Ray not working as expected

I would like to map an image I have to the face of a box in Pov-ray.
The image's dimensions are 1500x1125
(Example Image)
So I set up a scene with a light source above a camera looking at a box:
camera{location <3,1.8,0> look_at <3,1.8,1>}
light_source{<3,20,0> color rgb <1,1,1>}
box{<0,0,0> <1,0.75,1> texture{pigment{image_map{png "Test1.png"}}} translate <2.5,1.425,3>}
The box's face is 1 wide by 0.75 high (the z depth is not relevant), which is the same 4:3 aspect ratio as the image.
However, when the scene is rendered, the width of the image maps perfectly onto the box but some of the height is cut off. The image does not look stretched, so I am confused as to why it does not fit.

IIRC, POV-Ray always reads images as if they had a 1:1 aspect ratio.
If you insert a scale inside your pigment statement, before using it, that should fix it:
box{
  <0,0,0> <1,0.75,1>
  texture{
    pigment{
      image_map{png "Test1.png"}
      scale <1, 0.75, 1>
    }
  }
  translate <2.5,1.425,3>
}
(I apologize for not testing this to be really sure right now).
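In general (my own addition, also untested): since the map covers the unit square, the y-scale you want is the image's height divided by its width, which for the question's 1500x1125 image is the same 0.75:
pigment{
  image_map{png "Test1.png"}
  scale <1, 1125/1500, 1>  // image height / width = 0.75
}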

Related

Keep image centered in resized JavaFX Canvas

I am getting my feet wet with JavaFX, and have a simple drawing program which writes to a Canvas using a PixelWriter. The program draws a pixel at a time, reflecting each pixel over a number of axes to create a growing and evolving pattern centered on the canvas.
The Canvas is in the Center region of a BorderPane, and I have written the code to resize the canvas when the application window is resized. That works OK.
However, I would like to re-center the image on the new resized canvas so that the drawing can continue to grow on the larger canvas. What might be the best approach?
My ideas/attempts so far:
Capture a snapshot of the canvas and write it back to the resized canvas, but that comes out blurry (a couple of code examples below).
I dug into GraphicsContext translations, but that does not seem to move the existing image, just adjusts future drawing.
Maybe instead of resizing the canvas, I make a huge canvas bigger than I would expect my app window to ever be, and center it over the center region of the border pane (perhaps using a viewport of some kind?). I'm not thrilled about making some arbitrarily huge canvas and just trusting that it will be big enough, though. I also don't want to get into scaling: I am using PixelWriter so that I get the crispest image, without antialiasing and other processing.
My snapshot attempt looked like this, but was blurry:
SnapshotParameters params = new SnapshotParameters();
params.setFill(Color.WHITE);
WritableImage image = canvas.snapshot(params, null);
canvas.getGraphicsContext2D().drawImage(image, 50, 50);
The 50, 50 offset above is just for my testing/learning; I'll replace it with a proper computed offset once I get the basic copy working. Following the post "How to copy contents of one canvas to another?", I played with the setFill() parameter, to no effect.
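As for the proper computed offset, I imagine something like the following (oldWidth and oldHeight are hypothetical fields holding the canvas size saved before the resize):
// Center the old content on the resized canvas.
double offsetX = Math.rint((canvas.getWidth() - oldWidth) / 2);
double offsetY = Math.rint((canvas.getHeight() - oldHeight) / 2);
canvas.getGraphicsContext2D().drawImage(image, offsetX, offsetY);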
From the post "How to save a high DPI snapshot of a JavaFX Canvas" I tried the following code. The result was clearer, but I have not been able to figure out how to find or compute the pixelScale that gives the most accurate snapshot (the value 10 is just some number bigger than 1 that I typed in to see how it reacted):
int pixelScale = 10;
WritableImage image = new WritableImage((int) Math.rint(pixelScale * canvas.getWidth()), (int) Math.rint(pixelScale * canvas.getHeight()));
SnapshotParameters params = new SnapshotParameters();
params.setTransform(Transform.scale(pixelScale, pixelScale));
params.setFill(Color.WHITE);
canvas.snapshot(params, image);
canvas.getGraphicsContext2D().drawImage(image, 50, 50);
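One guess I have not verified: JavaFX 9 added Window.getRenderScaleX(), which might be the pixelScale I should be using:
// The window's render scale: 1.0 on standard displays, typically 2.0 on HiDPI/Retina.
double pixelScale = canvas.getScene().getWindow().getRenderScaleX();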
Thanks for any direction y'all can point me in!

How to get the approximate background color of an image?

Is there a way to get the approximate background color from an image in Flutter? I am getting my image from a URL. I don't need an exact background color: just an approximation - for instance, getting the color of the pixel in the top left corner (0, 0) would be just fine.
There seems to be no easy way to do this - I have tried many imaging packages, but they only provide "primary color" and not background color.
Old question, but for people still needing this, see the ImagePixels widget from the https://pub.dev/packages/image_pixels package (I am the author of this package):
@override
Widget build(BuildContext context) {
  return ImagePixels(
    imageProvider: image,
    builder: (context, img) {
      Color topLeftColor = img.pixelColorAt(0, 0);
      return Text("Pixel color at top-left: $topLeftColor.");
    },
  );
}
Note you could also get a dozen pixels all around the image (or at the corners of the image), and then average them. This would have a better chance of getting a good representative color.
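For example, something along these lines (an untested sketch; ImgDetails is the type of the builder's img argument, and I assume the image has finished loading so its width and height are available):
Color averageCorners(ImgDetails img) {
  // Sample the four corner pixels and average their channels.
  var corners = [
    img.pixelColorAt(0, 0),
    img.pixelColorAt(img.width - 1, 0),
    img.pixelColorAt(0, img.height - 1),
    img.pixelColorAt(img.width - 1, img.height - 1),
  ];
  int r = 0, g = 0, b = 0;
  for (var c in corners) {
    r += c.red;
    g += c.green;
    b += c.blue;
  }
  return Color.fromARGB(255, r ~/ 4, g ~/ 4, b ~/ 4);
}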
Also note, if all you want to do with this color is to extend it to a larger area, there is a constructor that does that for you: ImagePixels.container.
Have you tried the image package?
If you just want the top left corner pixel, I believe you can read the image's pixels and get it.
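A rough sketch of that approach (untested; it assumes the 4.x API of https://pub.dev/packages/image, where getPixel returns a Pixel with r/g/b channels, and it reads a local file for brevity where the asker would fetch the URL's bytes instead):
import 'dart:io';
import 'package:image/image.dart' as img;

void main() {
  // 'photo.png' is a placeholder path; decode it and read the top-left pixel.
  final image = img.decodeImage(File('photo.png').readAsBytesSync());
  final pixel = image!.getPixel(0, 0);
  print('approximate background: r=${pixel.r}, g=${pixel.g}, b=${pixel.b}');
}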

matlab - how to make 3D axes size go outside of figure

I first created a figure whose background is a .png image. Then I created a 3D axes on top of it, so the 3D axes sits over the .png image. Note that the .png image is not set inside the 3D axes; it is set in the figure itself, outside the axes.
I have a 3D .stl file of an apple set inside the 3D axes (you can't see the apple, by the way). When I move the apple around inside the 3D axes, using the options in the built-in MATLAB figure toolbar, it works fine. But the problem is that when I move the apple outside the borders of the 3D axes, it disappears.
To tackle this problem, I want to set the size of the 3D axes so that its limits go outside of the figure, so that I can move my apple around anywhere in the figure without being limited to the size of the 3D axes. Note: I didn't make the 3D axes invisible, so that it is easier for people to understand my question. But when this problem is solved I will use axis off to make the 3D axes invisible, while retaining and displaying the apple.
Here is the main code:
pearImage = 'pears.png';
appleModel = 'apple.stl';
backgroundImage = imread(pearImage);
[vertices,faces,~] = stlRead(appleModel);
axesHandle = axes('unit','normalized','position',[0 0 1 1]);
imagesc(backgroundImage)
set(axesHandle,'handlevisibility','off','visible','off')
uistack(axesHandle,'bottom')
stlPlot(vertices,faces)
Here is the function for stlPlot():
function stlPlot(vertices,faces)
object.vertices = vertices;
object.faces = faces;
patch(object,'FaceColor',[0.1 1.0 1.0],'EdgeColor','none')
camlight('headlight')
material('dull')
axis('image')
view([-135 35])
axis off % used to make the 3D axes invisible
I got the stlRead() and stlPlot() functions from here: https://kr.mathworks.com/matlabcentral/fileexchange/22409-stl-file-reader?focused=5193625&tab=function. Note that I edited the stlPlot() function to fit my purpose.
I believe you can solve this problem by changing the 'Clipping' property of the patch objects you create:
hPatch = patch(object, 'FaceColor', [0.1 1.0 1.0], 'EdgeColor', 'none', 'Clipping', 'off');
Or, more simply, you could probably just set the 'Clipping' property of the parent axes object itself (which controls clipping behavior for all of its children):
set(get(hPatch, 'Parent'), 'Clipping', 'off');
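In the context of the question's main script, that could be as simple as the following (untested; it assumes the apple's axes is still the current axes right after stlPlot() runs):
stlPlot(vertices,faces)
set(gca,'Clipping','off')  % disables clipping for the patch and any other children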

NSImage drawInRect: but the image is blurry?

The designer gave me a picture like this:
But when I use the drawInRect: API to draw the picture into the context, it comes out like this:
The size of the rect is exactly the size of the image, and the image has @1x and @2x versions.
The difference is very clear: the picture is blurry and there is a gray line on the right side of the image. My iMac has a Retina display.
================================================
I have found the reason:
// First draw: the image comes out sharp
[self.headLeftImage drawInRect:NSMakeRect(100, 100,
                                          self.headLeftImage.size.width,
                                          self.headLeftImage.size.height)];

// Second draw, after translating the context: the image comes out blurry
CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGContextSaveGState(context);
CGContextTranslateCTM(context, self.center.x, self.center.y);
[self.headLeftImage drawInRect:NSMakeRect(100, 100,
                                          self.headLeftImage.size.width,
                                          self.headLeftImage.size.height)];
CGContextRestoreGState(context);
In the first draw the image is not blurred, but after the translation it is blurry.
The problem is that you're translating the context to a non-integral pixel location. Then, the draw is honoring your request to put the image at a non-integral position, which causes it to be anti-aliased and color in some pixels partially.
You should convert the center point to device space, integral-ize it (e.g. by using floor()), and then convert it back. Use CGContextConvertPointToDeviceSpace() and CGContextConvertPointToUserSpace() to do the conversions. That does the right thing for Retina and non-Retina displays.
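Something along these lines (an untested sketch of the rounding described above; self.center is assumed to be the NSPoint from the question's code):
CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
// Convert the center to device (pixel) space, snap it to whole pixels, convert it back.
CGPoint deviceCenter = CGContextConvertPointToDeviceSpace(context, self.center);
deviceCenter.x = floor(deviceCenter.x);
deviceCenter.y = floor(deviceCenter.y);
CGPoint snappedCenter = CGContextConvertPointToUserSpace(context, deviceCenter);
CGContextSaveGState(context);
CGContextTranslateCTM(context, snappedCenter.x, snappedCenter.y);
[self.headLeftImage drawInRect:NSMakeRect(100, 100,
                                          self.headLeftImage.size.width,
                                          self.headLeftImage.size.height)];
CGContextRestoreGState(context);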

OpenGL blending renders black on black as gray?

I'm working in WebGL and I'm pretty new to OpenGL. I'm having trouble with the blending function. My options look like:
gl.enable(gl.BLEND)
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)
I render a source rectangle with color [0,0,0,0.5] on top of a destination background with color [0,0,0,1]. Based on everything I've read, I expect the result to be black. Instead it looks to be about 25% white. Here's what I get when I render red and black rectangles with alpha values ranging from 0.0 to 1.0.
View live demo and source here. Am I misunderstanding the blending function, and if so, how do I get what I expect? Thanks!
You should specify a separate blend function for the alpha channel:
gl.enable(gl.BLEND);
gl.blendEquation(gl.FUNC_ADD);
gl.blendFuncSeparate(
  gl.SRC_ALPHA,
  gl.ONE_MINUS_SRC_ALPHA,
  gl.ONE,
  gl.ONE_MINUS_SRC_ALPHA
);
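To see why, work through the asker's case (assuming the default premultipliedAlpha WebGL canvas on a light page): with gl.SRC_ALPHA applied to the alpha channel as well, drawing source alpha 0.5 over destination alpha 1 leaves the canvas with alpha 0.5 * 0.5 + 1 * (1 - 0.5) = 0.75, so the browser composites the page background through the canvas and black reads as roughly 25% white, exactly as observed. With gl.ONE as the source alpha factor, the result is 0.5 * 1 + 1 * (1 - 0.5) = 1.0, and black stays black.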
