event getX, based on screen size?

If I use event.getX() and getY() in a touch listener, how can I tell whether the touch is between 1/3 and 2/3 of the screen width?
public final float getX ()
Added in API level 1
getX(int) for the first pointer index (may be an arbitrary pointer identifier).
I don't know whether the returned coordinate is based on screen density, screen pixels, or something else. Does anyone have an idea how I can get the screen size of my device and then check whether the returned coordinates are, for example, (within some margins) in the middle of the screen?

Combine getX() with getWidth(): getX()/getWidth() gives you the relative position, as sketched below.
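A minimal sketch of that check, written as plain C++ for illustration rather than actual Android code; here x stands for the value of event.getX() and width for the view's getWidth(), both of which are in pixels of the same view, so screen density does not change the comparison:

// Is the touch in the middle third of the view?
// x = event.getX(), width = view.getWidth(), both in pixels.
bool isInMiddleThird(float x, float width) {
    float relative = x / width;                  // 0.0 = left edge, 1.0 = right edge
    return relative >= 1.0f / 3.0f && relative <= 2.0f / 3.0f;
}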

Related

How would I randomize where sprites appear on the screen in GameMaker?

I am making a shooter in which enemies (sprites) come into the game from random locations in the game window at a pre-set rate. I was wondering how I could do this in GameMaker 2.
I am able to make them appear at fixed locations, but am unable to figure out how to make the location random within a given boundary.
First off: don't use just sprites for enemies, as a sprite is only the image, with no functionality behind it. If you want to add functionality to a sprite, use objects instead (and assign a sprite to that object). GameMaker is object-oriented, so understanding objects is core to understanding how it works.
Once you have an object, use a random() value.
You pass in a value and it returns a random number between 0 and that value. (If you want a different minimum value, use random_range().)
For example in the Step Event:
var randomx = random(100); //this will choose a random decimal number between 0 and 100
The value I filled in is 100, but in your case, it should be the maximum width of your game screen.
You can then continue to use that randomx for the x position where you spawn your enemies. (and then set the y position to 0 to make them appear on top of the screen)
This random number will be a decimal. That's not important in your scenario, but keep in mind that if you want to compare a random number with an integer, it will need to be rounded first.
Source: https://manual.yoyogames.com/GameMaker_Language/GML_Reference/Maths_And_Numbers/Number_Functions/random.htm

High RMS error while "online" cv:stereoCalibration

I have two cameras set up horizontally (close to each other): a left camera, cam1, and a right camera, cam2.
First I calibrate the cameras (I calibrate with 50 pairs of images):
I calibrate both cameras separately using cv::calibrateCamera()
I calibrate the stereo pair using cv::stereoCalibrate()
My questions:
In stereoCalibrate: I assume the order of the camera data is important. Should the data from the left camera be imagePoints1 and from the right camera imagePoints2, or vice versa, or does it not matter as long as the order of the cameras is the same at every point of the program?
In stereoCalibrate: I get an RMS error of around 15.9319 and an average reprojection error of around 8.4536. I get those values if I use all images from the cameras. In the other case, I first save the images and select only the pairs where the whole chessboard is visible (all of the chessboard's squares are in the camera view, and every square is visible in its entirety); then I get an RMS of around 0.7. Does that mean that only offline calibration is good, and that if I want to calibrate the cameras I should select good images manually? Or is there some way to do the calibration online? By online I mean that I start capturing frames from the camera, find the chessboard corners on every frame, and calibrate the camera after I stop the capture.
I need only four distortion values, but I get five of them (including k3). In the old API version cvStereoCalibrate2 I got only four values, but in cv::stereoCalibrate I don't know how to do this. Is it even possible, or is the only way to get five values and use only four of them later?
My code:
Mat cameraMatrix[2], distCoeffs[2];
distCoeffs[0] = Mat(4, 1, CV_64F);
distCoeffs[1] = Mat(4, 1, CV_64F);
vector<Mat> rvec1, rvec2, tvec1, tvec2;
double rms1 = cv::calibrateCamera(objectPoints, imagePoints[0], imageSize,
                                  cameraMatrix[0], distCoeffs[0], rvec1, tvec1,
                                  CALIB_FIX_K3,
                                  TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 30, DBL_EPSILON));
double rms2 = cv::calibrateCamera(objectPoints, imagePoints[1], imageSize,
                                  cameraMatrix[1], distCoeffs[1], rvec2, tvec2,
                                  CALIB_FIX_K3,
                                  TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 30, DBL_EPSILON));
qDebug() << "Rms1: " << rms1;
qDebug() << "Rms2: " << rms2;
Mat R, T, E, F;
double rms = cv::stereoCalibrate(objectPoints, imagePoints[0], imagePoints[1],
                                 cameraMatrix[0], distCoeffs[0],
                                 cameraMatrix[1], distCoeffs[1],
                                 imageSize, R, T, E, F,
                                 TermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 100, 1e-5),
                                 CV_CALIB_FIX_INTRINSIC +
                                 CV_CALIB_SAME_FOCAL_LENGTH);
I had a similar problem. My problem was that I was reading the left images and the right images assuming that both lists were already sorted; I fixed it by using sorted() in the second line. Here is part of the code in Python:
images = glob.glob(path_left)
for fname in sorted(images):
    img = cv2.imread(fname)
    gray1 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chessboard corners
    ret, corners1 = cv2.findChessboardCorners(gray1, (n, m), None)
    # If found, add object points and image points (after refining them)
    if ret == True:
        i = i + 1
        print("Cam1. Chess pattern was detected")
        objpoints1.append(objp)
        cv2.cornerSubPix(gray1, corners1, (5, 5), (-1, -1), criteria)
        imgpoints1.append(corners1)
        cv2.drawChessboardCorners(img, (n, m), corners1, ret)
        cv2.imshow('img', img)
        cv2.waitKey(100)
The only reason why the order of the cameras/image sets is important is the rotation and translation you get from the stereoCalibrate function. The image set you pass to the function first is taken as the base, so the rotation and translation you get describe how the second camera is translated and rotated relative to the first one. Of course you can just reverse the result, which is the same as switching the image sets (see the snippet below). This holds only if the images in both sets correspond to each other (i.e. their order matches).
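Reversing the result, if needed, is a one-liner (my own sketch, not from the original answer): if a point transforms as x2 = R*x1 + T, then the inverse transform is x1 = R^T*x2 - R^T*T, so:

// Invert the stereo extrinsics returned by cv::stereoCalibrate
cv::Mat R_inv = R.t();          // rotation of camera 1 relative to camera 2
cv::Mat T_inv = -R.t() * T;     // translation of camera 1 relative to camera 2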
This is a bit tricky, but there are a few reasons why you are getting such a big RMS error.
First, I'm not sure how you detect your chessboard corners, but if the whole chessboard is not visible and you provide a valid chessboard model, findChessboardCorners should return false because it does not detect the chessboard. So you are able to automatically (i.e. online) omit these "chessless" images, as in the sketch below. Of course you also have to throw away the image from the second camera, even if that one is valid, to preserve the correct order in both sets.
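For illustration, a minimal C++ sketch of that first option; grayLeft, grayRight and boardModel are my own placeholder names for the current grayscale frames and the 3D chessboard model, while objectPoints and imagePoints are the containers from the question's code:

cv::Size boardSize(9, 6);                        // inner corners of the chessboard
std::vector<cv::Point2f> cornersL, cornersR;
bool okL = cv::findChessboardCorners(grayLeft,  boardSize, cornersL);
bool okR = cv::findChessboardCorners(grayRight, boardSize, cornersR);
if (okL && okR) {                                // keep the pair only if BOTH detections succeed
    imagePoints[0].push_back(cornersL);
    imagePoints[1].push_back(cornersR);
    objectPoints.push_back(boardModel);          // same 3D chessboard model for every view
}
// If either detection fails, the whole pair is skipped, so the two sets stay aligned.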
A second option is to back-project all corners for each image and calculate the reprojection error for every image separately (not only for the whole calibration). Then you can select, for example, the best 3/4 of the images by this error and recalculate the calibration without the outliers; a sketch follows below.
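A sketch of that second option (a helper of my own, not from the original post), using the per-view rvec/tvec that cv::calibrateCamera already returned (rvec1/tvec1 in the question's code):

// RMS reprojection error of a single view; call it for every image and
// drop the views whose error is far above the rest before recalibrating.
double viewRms(const std::vector<cv::Point3f>& objPts,
               const std::vector<cv::Point2f>& imgPts,
               const cv::Mat& rvec, const cv::Mat& tvec,
               const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs)
{
    std::vector<cv::Point2f> projected;
    cv::projectPoints(objPts, rvec, tvec, cameraMatrix, distCoeffs, projected);
    double err = cv::norm(imgPts, projected, cv::NORM_L2);   // sqrt of the summed squared distances
    return std::sqrt(err * err / projected.size());          // RMS over this one view
}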
Another reason could be the time synchronization between snapping images from the two cameras. If the delay is big and you move the chessboard continuously, you are actually trying to match projections of a slightly translated chessboard.
If you want a robust online version, I'm afraid you will end up with the second option, as it also helps you get rid of blurred images, wrong detections due to lighting conditions, and so on. You just need to set the threshold (how many images you cut off as outliers) carefully, so as not to throw away valid data.
I'm not that sure in this field, but I would say you can calculate five of them and use only four, because it looks like you would just be cutting off a higher order of the Taylor series. But I cannot guarantee that's true.
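For reference, the radial part of OpenCV's distortion model is
x_distorted = x * (1 + k1*r^2 + k2*r^4 + k3*r^6)
y_distorted = y * (1 + k1*r^2 + k2*r^4 + k3*r^6)
so fixing k3 at zero (the CALIB_FIX_K3 flag already used in the question's calibrateCamera calls) amounts to cutting off the r^6 term and leaves the usual four coefficients (k1, k2, p1, p2).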

Function to map an image to 3D point by point

I am trying to map my image, point by point, to 3-dimensional space.
For example, if my original image has an intensity of 100 at location X, I want to plot this point at 3D location Y with an intensity of 100. I want to repeat these steps for every point/pixel and get a final image. My biggest problem is that I want to do it point by point.
I appreciate any comments or advice. Thank you.
=======================
p.s.
As I was writing this question, I just came up with an idea. I know how to project a 'whole' image onto a certain location/shape in 3D by using the warp() function. Instead of passing my whole image as an argument to warp(), if I pass one point's intensity value and one 3D point, and repeat this step for every image point, will I get a decent-looking final image in 3D? If there is a better function to use, please let me know.
Sounds like you are looking for scatter3:
I = imread('cameraman.tif');
[x y]=meshgrid(1:size(I,1), 1:size(I,2));
scatter3(x(:),y(:),I(:),15,I(:),'filled');
axis tight; colormap gray
And this is what you get (after some changes to the viewpoint).
PS: I used a single scatter3 command to plot all the points at once. You may (though I have no idea why you would want to) do it one by one:
figure;
for ii = 1:numel(x)
    scatter3(x(ii), y(ii), I(ii), 15, I(ii), 'filled');
    hold on; % need this!
end
axis tight; colormap gray;

How to store mouse coordinates in GDI+?

I wanted to ask whether a POINT structure is the only way to store mouse coordinates. My problem with this approach is that when you declare:
POINT ps[20];
you need to have a fixed-size array. What if I need to store more points? Is there a way to make it dynamic (so it resizes itself when it reaches the limit)? I want to use this array to collect mouse coordinates and then draw lines in the WM_PAINT message. Thanks.
case WM_MOUSEMOVE:
{
    pt[i].x = LOWORD(lparam);
    pt[i++].y = HIWORD(lparam);
    --------
}
You would use an array of POINT structures.
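If a container that resizes itself is wanted, one common approach (my own sketch, not spelled out in the answer above) is std::vector<POINT>, with the drawing done by plain GDI's Polyline in WM_PAINT:

#include <windows.h>
#include <windowsx.h>   // GET_X_LPARAM / GET_Y_LPARAM
#include <vector>

static std::vector<POINT> gPoints;   // grows as needed, unlike POINT ps[20]

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wparam, LPARAM lparam)
{
    switch (msg)
    {
    case WM_MOUSEMOVE:
    {
        POINT p = { GET_X_LPARAM(lparam), GET_Y_LPARAM(lparam) };
        gPoints.push_back(p);                 // no fixed array limit to hit
        InvalidateRect(hwnd, NULL, TRUE);     // ask for a repaint
        return 0;
    }
    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        if (gPoints.size() > 1)               // Polyline needs at least 2 points
            Polyline(hdc, gPoints.data(), (int)gPoints.size());
        EndPaint(hwnd, &ps);
        return 0;
    }
    }
    return DefWindowProc(hwnd, msg, wparam, lparam);
}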

Element x, y, width, height translation to different dimensions

My math must be very rusty. I have to come up with an algorithm that will take a known:
x
y
width
height
of elements in a document and translate them to the same area on a different hardware device. For example, the document is created for print (let's assume 8.5"x11" letter size), and elements inside this document will then be transferred to a proprietary e-reader.
Also, the known facts about the e-reader: the screen is 825x1200 pixels in portrait, at 150 pixels per inch.
I am given the source elements from the printed document in points (72 PostScript points per inch).
So far I have an algorithm that gets close, but it needs to be exact, and I have a feeling I need to incorporate the aspect ratio into the picture. What I am doing now is:
x (in pixels) = ( x(in points)/width(of document in points) ) * width(of ereader in pixels)
etc.
Any clues?
Thanks!
You may want to change the order of your operations to reduce the effect of integer truncation, as follows:
x (in pixels) = x(in points) * width(of ereader in pixels) / width(of document in points)
I don't think you have an aspect ratio problem, unless you forgot to mention that your e-reader device has non-square pixels. In that case you will have a different number of pixels per inch horizontally and vertically on the device's screen, so you will use the horizontal ppi for x and the vertical ppi for y.
Assuming your coordinates are integers, the formula x/width truncates (integer division). What you need is to perform the division/multiplication in floating point, then truncate. Something like
(int)(((double)x) / width1 * width2)
should do the trick (using C-style conversion to double and int); a fuller sketch is below.
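To make the whole mapping concrete, here is a small C++ sketch using the numbers from the question (8.5"x11" = 612x792 points, e-reader screen 825x1200 pixels); the struct and function names are mine, for illustration only:

#include <cstdio>

struct Box { double x, y, width, height; };   // in source units (points here)

// Map a box from the 612x792 pt letter page onto the 825x1200 px e-reader screen,
// doing everything in double precision and truncating/rounding only at the end.
Box pointsToPixels(const Box& b,
                   double docW = 612.0, double docH = 792.0,     // page size in points
                   double devW = 825.0, double devH = 1200.0)    // screen size in pixels
{
    double sx = devW / docW;   // horizontal scale (~1.35 px per point)
    double sy = devH / docH;   // vertical scale   (~1.52 px per point)
    // Note: with these particular numbers sx != sy, so scaling the axes
    // independently stretches the layout; if the proportions must be kept,
    // use the smaller of the two factors for both axes.
    return { b.x * sx, b.y * sy, b.width * sx, b.height * sy };
}

int main() {
    Box heading{72, 72, 468, 36};             // an element inside 1" margins, in points
    Box px = pointsToPixels(heading);
    std::printf("%.0f, %.0f, %.0f x %.0f px\n", px.x, px.y, px.width, px.height);
}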
