How can I flip (mirror) an image along the Y-axis in Processing 3.4? I have tried scale(-1,1) but that just makes my image disappear.
If you call scale(-1, 1), your X values are negated, so anything you draw afterwards ends up mirrored off the left edge of the canvas; that's why your image disappears. You have to adjust your arguments accordingly. Here's an example:
size(500, 500);
PImage img = loadImage("my_image.jpg");
// negative X scale mirrors everything drawn from here on
scale(-1, 1);
// draw at x = -500 so the mirrored image lands back on screen
image(img, -500, 0, width, height);
Personally I find this very confusing, so I would avoid calling scale() with negative numbers. There are a number of ways to flip an image: I would probably use the get() function to get the colors from the image and copy them into a PGraphics instance.
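Here's a minimal sketch of that idea (untested; it assumes an image file named my_image.jpg in the sketch's data folder):
PImage img;
PGraphics flipped;
void setup() {
  size(500, 500);
  img = loadImage("my_image.jpg");
  flipped = createGraphics(img.width, img.height);
  flipped.beginDraw();
  // copy every pixel into the mirrored column of the buffer
  for (int y = 0; y < img.height; y++) {
    for (int x = 0; x < img.width; x++) {
      flipped.set(img.width - 1 - x, y, img.get(x, y));
    }
  }
  flipped.endDraw();
}
void draw() {
  image(flipped, 0, 0, width, height);
}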
Let's say I have a mesh and want to position it in 3D using the following:
mesh.position.set(5, 5, 5);
What are those numbers? I'm sure it isn't 5 pixels for x. For example:
mesh.position.x
Would indeed return 5, but this is where things get tricky.
I'm using the raycaster to work out where the mouse is, in either normalized device coordinates or pixel coordinates, computed as follows:
function onMouseMove(event){ // Calculate mouse movements
// Normalized Coordinate System
mouse.ncs.x = (event.clientX / window.innerWidth) * 2 - 1;
mouse.ncs.y = - (event.clientY / window.innerHeight) * 2 + 1;
// Pixel coordinates
mouse.p.x = event.clientX;
mouse.p.y = event.clientY;
}
But how would I work out how many pixels that 5 corresponds to? Any help on this will be much appreciated.
When you use Raycaster.setFromCamera() you must use normalized device coordinates, in the range [-1, 1], which it looks like you're already computing. The units of measurement of the 3D world don't matter; the ray will determine whether or not it intersects your object. Try following this example in the docs.
To answer your original question, Three.js and WebGL at large are natively "unitless". You can pretend the units are in inches, meters, whatever you want them to be. The only instance where units matter is with THREE.CSS3DRenderer, where units are a mix of pixels and unitless because you're dealing with HTML elements.
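As a rough sketch of the usual pattern (assuming a camera and a mesh already exist in your scene; those names are placeholders here, and mouse.ncs is the normalized vector from your snippet above):
var raycaster = new THREE.Raycaster();
function onMouseMove(event){
  // normalized device coordinates, in [-1, 1]
  mouse.ncs.x = (event.clientX / window.innerWidth) * 2 - 1;
  mouse.ncs.y = - (event.clientY / window.innerHeight) * 2 + 1;
  raycaster.setFromCamera(mouse.ncs, camera);
  var intersects = raycaster.intersectObject(mesh);
  if (intersects.length > 0) {
    // intersects[0].point is in world units, not pixels
    console.log(intersects[0].point);
  }
}
window.addEventListener('mousemove', onMouseMove);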
I am trying to detect the two vertical lines shown in the attached images using image processing. The lines are low contrast.
The location is shown in the first image with yellow arrows.
The original image is also attached.
I tried adaptive thresholding and normal thresholding using the maximum and minimum over local windows, but I can't detect the lines.
Any ideas how to detect the two vertical lines in image processing?
There is a trick for when the contrast is low in the bright pixels: Otsu's thresholding method (https://en.wikipedia.org/wiki/Otsu%27s_method) can be used to isolate the bright side of the histogram. After that, you can normalize that part of the histogram to (0, 255) and set the darker pixels to 0, as in the code below:
#include <opencv2/opencv.hpp>

cv::Mat img = cv::imread("E:\\Workspace\\KS\\excercise\\sjB8q.jpg", 0);
cv::Mat work;
for (int i = 0; i < 4; i++) // number of iterations has to be adjusted
{
    // Otsu picks the threshold automatically (the 30 is ignored)
    cv::threshold(img, work, 30, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    // keep only the bright side of the histogram
    cv::bitwise_and(img, work, img);
    // stretch the surviving pixels back to the full (0, 255) range
    cv::normalize(img, img, 0, 255, cv::NORM_MINMAX, -1, work);
}
Then your contrast will improve with each pass. (Result images for i = 2, i = 4, and i = 6 iterations were attached here.)
After that preprocessing, detecting the dark lines should be easier. This answer is just an explanation of the idea; if you want to know more, just ask in a comment.
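For example, one possible follow-up step (a sketch only, untested on your image; the 1x25 kernel height is a guess that will need tuning) is a morphological opening with a tall, narrow kernel, which keeps thin vertical structures and suppresses the rest:
// invert so the dark lines become bright
cv::Mat dark, lines;
cv::bitwise_not(img, dark);
// an opening with a 1-pixel-wide, 25-pixel-tall kernel keeps only
// structures that are vertically elongated
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(1, 25));
cv::morphologyEx(dark, lines, cv::MORPH_OPEN, kernel);
cv::threshold(lines, lines, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);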
What value fed to strokeWeight() will give a stroke width of one pixel regardless of the current scale() setting?
I think strokeWeight(0) should work. Here is an example:
void setup() {
size(100,100);
noFill();
scale(10);
// 1st square, stroke will be 10 pixels
translate(3,3);
strokeWeight(1);
beginShape();
vertex(-1.0, -1.0);
vertex(-1.0, 1.0);
vertex( 1.0, 1.0);
vertex( 1.0, -1.0);
endShape(CLOSE);
// 2nd square, stroke will be 1 pixel
translate(3,3);
strokeWeight(0);
beginShape();
vertex(-1.0, -1.0);
vertex(-1.0, 1.0);
vertex( 1.0, 1.0);
vertex( 1.0, -1.0);
endShape(CLOSE);
}
Kevin did offer a couple of good approaches.
Your question doesn't make it clear what level of comfort you have with the language. My assumption (and I could be wrong) is that the layers approach isn't clear because you might not have used PGraphics before.
However, this option Kevin provided is simple and straightforward:
multiplying the coordinates manually
Notice how most drawing functions take not only the coordinates, but also the dimensions?
Don't use scale(), but keep track of a multiplier floating point variable that you use for the shape dimensions. Manually scale the dimensions of each shape:
void draw(){
//map mouseX to a scale between 10% and 300%
float scale = map(constrain(mouseX,0,width),0,width,0.1,3.0);
background(255);
//scale the shape dimensions, without using scale()
ellipse(50,50, 30 * scale, 30 * scale);
}
You can run this as a demo below:
function setup(){
createCanvas(100,100);
}
function draw(){
//map mouseX to a scale between 10% and 300%
var scale = map(constrain(mouseX,0,width),0,width,0.1,3.0);
background(200);
//scale the shape dimensions, without using scale()
ellipse(50,50, 30 * scale, 30 * scale);
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.5.7/p5.min.js"></script>
Another answer is in the question itself: what value would you feed to strokeWeight()? If scale() is making the stroke bigger, but you want to keep its appearance the same, you need to use a smaller stroke weight as the scale increases: the thickness is inversely proportional to the scale:
void draw(){
//map mouseX to a scale between 10% and 300%
float scale = map(constrain(mouseX,0,width),0,width,0.1,3.0);
background(255);
translate(50,50);
scale(scale);
strokeWeight(1/scale);
//scaled shape, same appearing stroke, just smaller in value as scale increases
ellipse(0,0, 30, 30);
}
You can run this below:
function setup(){
createCanvas(100,100);
}
function draw(){
//map mouseX to a scale between 10% and 300%
var scaleValue = map(constrain(mouseX,0,width),0,width,0.1,3.0);
background(240);
translate(50,50);
scale(scaleValue);
strokeWeight(1/scaleValue);
//scale the shape dimensions, without using scale()
ellipse(0,0, 30, 30);
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.5.7/p5.min.js"></script>
Kevin was patient, not only answering your question but also your comments, being generous with his time. You need to be patient too: carefully read and understand the answers provided, try things on your own, then come back with specific questions for clarification if needed. It's the best way to learn.
Simply asking "how do I do this?" without showing what you've tried and what your thinking behind the problem is, expecting a snippet to copy/paste, will not get you very far, and this is not what Stack Overflow is about.
You'll gain way more by learning, using the available documentation, and especially thinking about the problem on your own first. You might not crack the problem on the first go (I know I certainly don't), but reasoning about it and viewing it from different angles will get your gears going.
Always be patient; it will serve you well in the long run, regardless of the situation.
Update: perhaps what you mean by
What value fed to strokeWeight() will give a stroke width of one pixel regardless of the current scale() setting?
is how to draw without anti-aliasing?
If so, you can disable smoothing by calling noSmooth() once in setup(). Try it with the example code above.
None.
The whole point of scale() is that it, well, scales everything.
You might want to draw things in layers: draw one scaled layer, and one unscaled layer that contains the single-pixel-width lines. Then combine those layers.
That won't work if you need your layers to be mixed, such as an unscaled line on top of a scaled shape, on top of another scaled line. In that case you'll just have to unscale before drawing your lines, then scale again to draw your shapes.
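If the layers route sounds appealing, here is a minimal sketch of the idea (untested), using two PGraphics buffers, one scaled and one left at 1:1 for the single-pixel strokes:
PGraphics shapes; // scaled layer
PGraphics lines;  // unscaled layer for one-pixel strokes
void setup() {
  size(100, 100);
  shapes = createGraphics(width, height);
  lines = createGraphics(width, height);
}
void draw() {
  shapes.beginDraw();
  shapes.background(255);
  shapes.scale(10);
  shapes.ellipse(5, 5, 6, 6); // drawn 10x bigger
  shapes.endDraw();
  lines.beginDraw();
  lines.clear(); // keep this layer transparent
  lines.strokeWeight(1);
  lines.line(0, 50, width, 50); // stays one pixel wide
  lines.endDraw();
  image(shapes, 0, 0);
  image(lines, 0, 0);
}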
PIX* returnRotatedImage(PIX* image, float theta)
{
PIX* rotated = pixRotate(image, -theta, L_ROTATE_AREA_MAP, L_BRING_IN_BLACK, image->w, image->h);
return rotated;
}
When I execute the above code on an image, the resulting image has the edges cut off.
Example: the original scan, followed by the image after being run through the above function to rotate it by ~89 degrees.
I don't have 10 reputation yet, so I can't embed the images, but here's a link to the two pictures: http://imgur.com/a/y7wAn
I need it to work for arbitrary angles as well (not just angles close to 90 degrees), so unfortunately the solution presented here won't work.
The description for the pixRotate function says:
* (6) The dest can be expanded so that no image pixels
* are lost. To invoke expansion, input the original
* width and height. For repeated rotation, use of the
* original width and height allows the expansion to
* stop at the maximum required size, which is a square
* with side = sqrt(w*w + h*h).
However, it seems to expand the destination after rotating, and thus pixels are lost even though the final image size is correct. If I use pixRotate(..., 0, 0) instead of pixRotate(..., w, h), I end up with the image rotated within the original dimensions: http://i.imgur.com/YZSETl5.jpg.
Am I interpreting the pixRotate function description incorrectly? Is what I want to do even possible? Or is this possibly a bug?
Thanks in advance.
I have an application that accepts images as input and removes the background on which the image was taken. For example, if you pass in an image of a book on a blanket, the resulting image will be just the book with a transparent background.
The problem I have is when you input an image that has a large empty space in it, e.g. an elastic band. The floodfill algorithm starts at the corners of the image and removes the background of the picture, but of course it never makes it into the interior of the elastic band.
Is there a way of implementing this such that I can take an image of a closed circle against a background and get back just the loop itself, with no background inside or outside it?
You could always resample the image after every flood fill, and restart it whenever you find a color that matches the original background.
Flood fill algorithms are designed to start in one spot and, from there, fill a constrained area of similar colors. The circle does not match the background color, so the fill algorithm can't "jump" over it to reach the enclosed region.
The solution is to flood different areas.
Here is a very crude, recursive, slow flood fill algorithm (from memory, untested):
public void floodFill(Image img, int x, int y, Color oldColor, Color newColor) {
// Check boundary
if (img.contains(x, y)) {
// Get current pixel color
Color currentColor = img.getColor(x, y);
// Check color match
if (currentColor.equals(oldColor)) {
// Set to new color
img.setColor(x, y, newColor);
// Start again on each of the neighbors
floodFill(img, x - 1, y, oldColor, newColor);
floodFill(img, x + 1, y, oldColor, newColor);
floodFill(img, x, y - 1, oldColor, newColor);
floodFill(img, x, y + 1, oldColor, newColor);
}
}
}
This question and its answers address a very similar problem.
You can figure out what the predominant color of the background is (which you should be able to do, since you already remove the background starting at the corners) and look for that color everywhere else in the image.
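Using the same hypothetical Image/Color helpers as the flood fill above (getWidth() and getHeight() are assumed here, matching the getColor()/setColor() style), a sketch of that rescan could look like this:
// sweep the whole image; wherever the background color survives
// (e.g. inside the elastic band), start another flood fill there
public void removeRemainingBackground(Image img, Color background, Color newColor) {
  for (int y = 0; y < img.getHeight(); y++) {
    for (int x = 0; x < img.getWidth(); x++) {
      if (img.getColor(x, y).equals(background)) {
        floodFill(img, x, y, background, newColor);
      }
    }
  }
}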