How would you draw a vertical line in an LC-3 simulator like Pennsim? - lc3

I know how to draw horizontal lines by doing
LOOP1 STR R0, R5, #0 ;store the pixel value (in R0) at the address in R5, the starting pixel
ADD R5, R5, #1 ;advance the pixel address along the coordinate
ADD R7, R7, #-1 ;decrement the desired-length counter
BRp LOOP1 ;keeps looping until the register with the desired length reaches zero
Obviously the registers will be different depending on what the user chooses for the counter and the coordinate locations, but those were just the numbers from my previous code. What would be a way to adapt that code to draw a vertical line? I'm not completely comfortable with the code formatting on this website yet either, so please excuse me if I'm wrong in a few areas.

The difference between a horizontal line and a vertical line is how we increment the pixel position.
Note that a two-dimensional coordinate can (and must) be mapped to a one-dimensional address by a formula like rowCoordinate * columnSize + columnCoordinate. (Memory is a one-dimensional system; it doesn't have two dimensions, so we use this kind of mapping.)
So, as you have shown, we can draw a horizontal line by traversing each pixel from (row, 0) to (row, columnSize-1). By the mapping formula above, going from (row, c) to (row, c+1) means simply adding 1 to the pixel's address.
To draw a vertical line, we want to vary the row position and keep the column position fixed, i.e. go from (0, col) to (rowSize-1, col). According to the 2D-to-1D mapping, going from (r, col) to (r+1, col) means incrementing the address by columnSize instead of 1.
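To make the addressing concrete, here is a minimal Python sketch of that mapping (the display dimensions and the framebuffer list are stand-ins for illustration, not LC-3 code):
rowSize, columnSize = 128, 128                  # assumed display dimensions
framebuffer = [0] * (rowSize * columnSize)      # toy model of the 1D display memory

def draw_horizontal(row, col, length, color):
    addr = row * columnSize + col               # rowCoordinate * columnSize + columnCoordinate
    for _ in range(length):
        framebuffer[addr] = color
        addr += 1                               # next pixel in the same row: add 1

def draw_vertical(row, col, length, color):
    addr = row * columnSize + col
    for _ in range(length):
        framebuffer[addr] = color
        addr += columnSize                      # next pixel in the same column: add columnSize
In the LC-3 loop itself you would keep columnSize in a register and ADD that register to the pixel-address register on every iteration instead of ADD ..., #1 (the ADD immediate field only holds values from -16 to 15, so a stride like 128 cannot be an immediate).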

Related

Unexpected behaviour when using OpenCV remap in Python

I am encountering unexplained behaviour when using the cv2.remap function in OpenCV. I have made a function that translates the y coordinate of each pixel upwards by a number that differs for each x coordinate. The number is calculated from an underlying curve that starts at x=0 and ends at x=640 (for a 360 x 640 picture).
About a week ago I posted a question on the same subject but with a different problem - the top part of the image was cropped. This was answered, and I understood that the problem was that several pixels from the original image were mapped to the same positions in the new image, so the pixels that were first mapped to the top parts were overwritten by others from the lower parts.
However, the issue I am facing now is a bit different, which is why I posted a new question. In this instance the problem is that, since I am lifting all y coordinates upwards, I would expect the bottom section of the image to be blank. This is achieved with the piece of code below:
original = cv2.imread('C:\\Users\\User\\Desktop\\alaskan-landscaps3.jpg')
y_size,x_size=original.shape[:2]
map_x = np.zeros(original.shape[:2], np.float32)
map_y = np.zeros(original.shape[:2], np.float32)
for i in range(0, y_size):
    for j in range(0, x_size):
        map_y[i][j] = i + yvalues[j]
        map_x[i][j] = j
where yvalues[j] is the y-coordinate of the curve at x-coordinate j. Sure enough, when printing the y-mappings for the first 10 rows of the map_y array at x=383 the lowest number is 199.102
for j in range(0, 360):
    print('At x=383,y=' + str(j) + ' mapping of y-coordinate is:' + str(map_y[j][383]))
At x=383,y=0 mapping of y-coordinate is:199.102
At x=383,y=1 mapping of y-coordinate is:200.102
At x=383,y=2 mapping of y-coordinate is:201.102
At x=383,y=3 mapping of y-coordinate is:202.102
At x=383,y=4 mapping of y-coordinate is:203.102
At x=383,y=5 mapping of y-coordinate is:204.102
At x=383,y=6 mapping of y-coordinate is:205.102
When I use a more complicated warping function I get the below results for the same first 10 lines - the idea being that a number of consecutive pixels in the vertical direction should be mapped to the same pixel in the new image
At x=383,y=0mapping is:199.102
At x=383,y=1mapping is:199.102
At x=383,y=2mapping is:199.102
At x=383,y=3mapping is:199.102
At x=383,y=4mapping is:199.102
At x=383,y=5mapping is:200.102
At x=383,y=6mapping is:200.102
At x=383,y=7mapping is:200.102
At x=383,y=8mapping is:200.102
At x=383,y=9mapping is:201.102
At x=383,y=10mapping is:201.102
However, this time, the image is completely filled in and there is no blank space at the bottom part. You can see both images below:
(The pixelated effect is not what I intended, but that's for me to fix :) ). I have checked all the entries of the specific column in the map_y array for x=383, and all values are > 199.102, so it appears this is not the same problem as in the previous case, i.e. there are no pixels higher up in the column that get mapped to the lower part.
So my question is: what is different between the two mappings that caused this drastic change? I expected that, since the mappings of the second image also start from a high value, there would be a blank space at the bottom as well.
Apologies for the long post but I have been trying to work out the reason for a few days and came out none the wiser. Any help will be appreciated.
I think you misunderstand how remap() works.
Let me quote from the OpenCV documentation for the function initUndistortRectifyMap():
"The function actually builds the maps for the inverse mapping algorithm that is used by remap(). That is, for each pixel (u, v) in the destination image, the function computes the corresponding coordinates in the source image."
So your line
At x=383,y=0mapping is:199.102
actually means that the value of pixel (383, 0) in the mapped image is taken from pixel (383, 199.102) in the original image.
A pixel is not mapped, if:
a) the value of map_x[i][j] or map_y[i][j] is -1 (-16 in case of short map type)
or
b) the value is outside the boundaries of the source image.
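To illustrate the inverse mapping, here is a minimal sketch (the input path and the linear yvalues ramp are placeholders, not the poster's data). remap() reads the maps per destination pixel, so the bottom destination rows read from source rows beyond the image and, with BORDER_CONSTANT, stay black - which is what produces the blank band in the first case:
import cv2
import numpy as np

src = cv2.imread('input.jpg')                       # placeholder test image
h, w = src.shape[:2]
yvalues = np.linspace(0, 100, w, dtype=np.float32)  # stand-in for the per-column shift

# dst[i, j] = src[map_y[i, j], map_x[i, j]]
map_x, map_y = np.meshgrid(np.arange(w, dtype=np.float32),
                           np.arange(h, dtype=np.float32))
map_y += yvalues[np.newaxis, :]                     # destination row i reads source row i + yvalues[j]

dst = cv2.remap(src, map_x, map_y, cv2.INTER_LINEAR,
                borderMode=cv2.BORDER_CONSTANT, borderValue=0)
cv2.imwrite('shifted.jpg', dst)
In the second mapping the map_y values presumably never run past the bottom of the source image, so every destination pixel still finds a source pixel and nothing is left blank.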

Game Maker - Touch Event

I'm making my first game in Game Maker.
In the game I need the user to draw a figure, for example a rectangle, and the game has to recognize the figure. How can I do this?
Thanks!
Well, that is a pretty complex task. To simplify it, you could ask them to place a succession of points, using the mouse coordinates in the click event, and automatically connect them with lines. If you store every point in the same ds_list structure, you will be able to check conditions on angles, distances, etc. This way, you can determine the shape. May I ask why you want to do this?
The way I would solve this problem is pretty simple. I would create a few variables, one for each point; when someone clicked on one of the points it would be set to true, and then I would wait for the player to click on the next point. If the player clicked on the next point, I would bring in a sprite as a line, using image_angle to line both points up, and wait for the player to click the next point.
Next I would have a step event waiting to see if all points were clicked, and when they were, either draw a triangle at those coordinates or place a sprite at the correct coordinates to fill in the triangle.
Another way you could do it would be to decide what those points would be and check against mouse_x and mouse_y to see whether that was a point, and if it was, do as above. There are many ways to solve this problem. Just keep trying and you will find one that works for your skill level and what you want to do.
You need to use the draw_rectangle(x1, y1, x2, y2, outline) function. As for recognition of the figure, use point_in_rectangle(px, py, x1, y1, x2, y2).
I'm just throwing ideas around because I can't code right now. But listen to this, I think it could work.
We suppose that the user must keep their finger on the touchscreen, or an event is triggered and all data from the touch event is cleared.
I assume that in the future you may also need to recognize other simple geometric figures.
1 : Choose a fixed amount of movement in pixels, defined relative to the viewport dimensions (I'll call this constant MOV from now on); for every MOV pixels of movement, store in a buffer (pointsBuf) the coordinates of the point where the finger is.
2 : Every time a point is stored, calculate the running average of both the X and Y coordinates over every stored point. (Keep the previous average and a counter to reduce the time complexity.) Comparing them, we can now tell the direction and orientation of the line. Store them in a 2D buffer (dirVerBuf).
3 : If a point differs "drastically" from the running average of the X and Y coordinates, we can assume that the finger changed direction. This is where tuning MOV becomes critical, because we must now calculate an angle. Since only a very unsteady hand would draw really distorted lines, we can be about 95% sure it is safe to take, as the vertex, the second point that did not change the coordinate average, and, say, the last point and the second point before the vertex to calculate the angle. You now have one angle. Test against the user's best error margin to decide whether the angle is roughly 90, 60, 45, etc. degrees. Store it in a new buffer (angBuf).
4 : Delete the values from pointsBuf and repeat steps 2 and 3 until the user's finger leaves the screen.
5 : If four of the angles are 90 degrees, the four orientations and two of the directions are different, the last point is reasonably near (depending on MOV) the first stored corner, and the two X sides and the two Y sides are each roughly equal but of different length between the pairs, then you can connect the four corners, using the four best values next to the four coordinates, to make a perfect rectangular shape.
It's late and I could have forgotten something, but with this method I think you could even recognize a triangle, a circle, etc., with just some editing and comparison (a rough sketch of the corner-angle check is given after the edit below).
EDIT: If you are really lazy, you could instead use a much more space-hungry strategy: just create a grid of rectangles (or even triangles) of a fixed size and check which ones the finger has touched, then connect their centers once you have figured out the shape, obviously ignoring the ones touched by mistake. This would make it extremely easy to handle even circles using the native drawing functions. Gg.
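Not GML, but here is a rough Python sketch of the kind of corner-angle check described in steps 3 and 5 (the function name, the four-point input, and the tolerance are assumptions for illustration):
import math

def is_roughly_rectangle(points, angle_tol_deg=15):
    # points: four ordered (x, y) corners, e.g. collected from the touch/click events
    if len(points) != 4:
        return False
    for i in range(4):
        prev_pt, cur, nxt = points[i - 1], points[i], points[(i + 1) % 4]
        v1 = (prev_pt[0] - cur[0], prev_pt[1] - cur[1])
        v2 = (nxt[0] - cur[0], nxt[1] - cur[1])
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 == 0 or n2 == 0:
            return False
        cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if abs(angle - 90) > angle_tol_deg:      # every corner must be roughly 90 degrees
            return False
    return True

print(is_roughly_rectangle([(0, 0), (100, 4), (103, 52), (2, 50)]))   # a slightly wobbly rectangle -> True
The same idea extends to triangles (three corners of roughly 60 degrees) or other simple shapes by changing the expected angles.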

Drawing textures in raycasting engine [closed]

I am trying to make a raycasting engine in assembly and I have a problem: drawing textures does not seem to work properly.
This is what it looks like:
In the loops that find the collision with the wall, if a collision was found, I took the fractional part of the x or the y coordinate and used it to calculate where on the texture to draw.
I have tried debugging and found that the problem could be that the final texture x stays the same for a few columns in a row, but you can see in the pictures that it works almost fine when looking from the side, so I don't think that's the problem.
The wanted result is just that the textures are drawn correctly, without those distortions.
I think the problem is somewhere in the code here:
mov ebx,WINDOW_HEIGHT / 2
sub ebx,eax
mov eax,height
mov screenypos,ebx
dec typeofwall
movss xmm0,floatingpoint
mulss xmm0,FP4(64.0f)
mov eax,typeofwall
cvtsi2ss xmm1,eax
mulss xmm1,FP4(64.0f)
addss xmm0,xmm1
movss tempvarr,xmm0
invoke FLOOR,tempvarr
cvtss2si eax, xmm0
mov finaltexturex,eax
;invoke BUILDRECT,screenpos,screenypos,linewidth,height,hdc,localbrush
invoke DrawImage,hdc,wolftextures,screenpos,screenypos,finaltexturex,0,linewidth,64,linewidth,height
Try printing first, for each screen column, the "hit" coordinates and which one you would use for texturing (keep in mind that you have to use either the map x or the map y coordinate for texturing, depending on which grid line the ray intersected first with the wall and from which direction); a rough sketch of that choice is given at the end of this answer.
Now I have another idea... are you even using a byte map[16][16]; or something similar for the wall definitions (Wolf3D-style 2.5D ray casting), or is this a semi-polygon map system, calculating intersections with segments (the DOOM 2.5D perspective BSP 2D-edge drawer - not ray casting at all in the original DOOM)?
If you are doing the Wolf3D-style raycaster, be aware that you have to clean up your intersection formulas a lot and decide wisely which part of the calculation you do when, as a bad order of operations can quickly accumulate a considerable amount of error, leading to quirks like "holes" in walls (when, for a single pixel column, you miss the intersection with the wall), etc.
With floating-point numbers you are even more susceptible to unexpected accuracy problems, as the precision shifts quickly with the exponent (so around coordinates 0.0,0.0 you have considerably better accuracy than around 1e6,1e6 on the map).
When done properly, it should look like "easy" stuff. I once made a quick and dirty version in Pascal in one afternoon (as an example for a friend who was trying to learn Pascal). But it's just as easy to do it wrong (for example, Bethesda's first Elder Scrolls game, "ARENA", had horrible intersection calculations, with wall y-positions jagging a lot). And the improper calculation usually not only has worse accuracy but also involves more operations, so it's slower.
Use paper and pencil to draw it all down (the map grid, the projection plane; triangulate around with the values you have and look at how you can minimize the setup phase per screen column x, as the minimum amount of calculation gives the highest accuracy).
(the answer is quite general, because there's almost no code to check (the code posted looks OK to me)).
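For reference, a rough Python sketch (not the poster's assembly) of the usual way the texture column is picked from the hit point; TEX_WIDTH = 64 matches the 64.0 constant in the question's code, everything else is assumed:
import math

TEX_WIDTH = 64   # texture width in pixels, matching the FP4(64.0f) constant

def texture_column(hit_x, hit_y, hit_vertical_gridline):
    # Use the coordinate that runs *along* the wall: y for a vertical grid line,
    # x for a horizontal one, then keep only its fractional part.
    along_wall = hit_y if hit_vertical_gridline else hit_x
    frac = along_wall - math.floor(along_wall)      # fractional part in [0, 1)
    return int(frac * TEX_WIDTH)                    # texture x in 0 .. TEX_WIDTH-1

print(texture_column(3.27, 5.0, hit_vertical_gridline=False))   # -> 17
A common cause of this kind of distortion is using the wrong axis for one of the two wall orientations, which is exactly what the per-column printout suggested above should reveal.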

Detecting individual images in an array of images

I'm building a photographic film scanner. The electronic hardware is done; now I have to finish the mechanical advance mechanism, and then I'm almost done.
I'm using a line-scan sensor, so it's one pixel wide by 2000 pixels high. The data stream I will be sending to the PC over USB, with an FTDI FIFO bridge, will be just 1-byte pixel values. The scanner will pull through an entire strip of 36 frames, so I will end up scanning the entire strip. For the beginning I'm willing to split them up manually in Photoshop, but I would like to implement something in my program to do this for me. I'm using C++ in VS. So, basically, I need to find a way for the PC to detect the near-black strips between the images on the film, isolate the images, and save them as individual files.
Could someone give me some advice for this?
That sounds pretty simple compared to the things you've already implemented; you could:
- calculate an average pixel value per row, and call the resulting signal s(n) (n being the row number);
- set a threshold for s(n), setting everything below that threshold to 0 and everything above to 1.
Assuming you don't know the exact pixel height of the black bars and the negatives, search for periodicities in s(n). What I describe in the following is total overkill, but that's how I roll:
- use FFTW to calculate a discrete Fourier transform of s(n); call it S(f) (f being the frequency, i.e. 1/period);
- find argmax(abs(S(f))); that f corresponds to the bar periodicity: number of rows / f is the distance between two black bars;
- S(f) is complex and thus has a phase; the phase of S(f_max), divided by 2π and multiplied by that bar distance, gives you the offset (position) of the bars;
- to calculate the width of the bars, you could do the same with the second-highest peak of abs(S(f)), but it will probably be easier to just count the average length of the runs of 0 around the calculated center positions of the black bars (a rough sketch of these first steps is given at the end of this answer).
To get the exact width of the image strip, only take the pixels in which the image border may lie: r_left(x) would be the signal representing the few pixels in which the actual image might border the filmstrip material (x being the coordinate along that row). Now, use a simplistic high-pass filter (e.g. f(x) := r_left(x) - r_left(x-1)) to find the sharpest edge in that region (argmax(abs(f(x)))). Use the average of these edges as the border location.
By the way, if you want to write a source block that takes your scanned image as input and outputs a stream of pixel row vectors, using GNU Radio would offer you a nice method of having a flow graph of connected signal processing blocks that does exactly what you want, without you having to care about getting data from A to B.
I forgot to add: use the resulting coordinates with something like OpenCV, or any other library capable of reading images, specifying sub-images by coordinates, and saving to new images.
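A rough NumPy sketch of those first few steps (row average, threshold, FFT peak); the scan array, the threshold value, and the function name are assumptions for illustration:
import numpy as np

def find_frame_pitch(scan, threshold=30):
    # scan: 2D uint8 array, one row per sensor line (rows x 2000 pixels)
    s = scan.mean(axis=1)                       # s(n): average pixel value per row
    binary = (s > threshold).astype(float)      # 1 = image rows, 0 = near-black gap rows

    spectrum = np.abs(np.fft.rfft(binary - binary.mean()))
    f = np.argmax(spectrum[1:]) + 1             # strongest non-DC peak
    pitch = len(binary) / f                     # rows per frame (image + gap)
    return pitch, binary
Frame boundaries can then be refined by looking for runs of zeros in the thresholded signal near multiples of the pitch, and each sub-image cropped and saved with OpenCV as suggested above.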

OpenCV find all significant edges along a line

I have an image that I used to analyze in LabView using a method called Rake. Basically, what that method does is it finds all the significant edges along parallel lines on an image.
http://zone.ni.com/reference/en-XX/help/370281P-01/imaqvision/imaq_rake_3/ (as seen on the last image at the bottom of the link).
The beauty of this function is that it will give you all edge points that are larger than a certain edge strength, and each edge will only generate one edge point (thickness of the edge line is 1 pixel)
I want to use OpenCV to do something similar. The way I could imagine doing this is:
- deconstructing the Canny operator with a filter of my choice,
- hysteresis thresholding of the edge values with two thresholds,
- followed by non-maxima suppression,
- reading the pixels along that line and marking all pixels that are larger than my threshold.
The problem is that Canny comes as a bundle and I can't find the non-maxima suppression function by itself.
Does anybody know of a way to do something similar to the operation I've described?
Thanks
Not sure if I understand this question fully, but about the unbundled non-maximum suppression part:
One simple way for 2d non-maximum suppression is this:
dilate the image. Dilation in OpenCV sets the value of each pixel to the max() of the local neighborhood. Repeat a few times or use a larger kernel to get the desired radius.
Then compare the dilated image with the original and set all pixels with differing values to zero.
The remaining pixels are local maxima.
# some code I once used in OpenCV/Python (old cv API)
# given an image, sets all pixels to zero, unless they are local maxima
def supressNonMaxima(img):
    localMax = cvCreateImage(cvGetSize(img), IPL_DEPTH_16U, 1)
    cvDilate(img, localMax, None, 3)          # max() with radius of 3
    mask = cvCreateImage(cvGetSize(img), 8, 1)
    cvCmp(img, localMax, mask, CV_CMP_LT)     # mask marks pixels below their local max
    cvSet(img, 0, mask)                       # zero out the non-maxima
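For reference, the same dilate-and-compare trick with the modern cv2 API (a minimal sketch; the function name and the 7x7 kernel are arbitrary choices, not from the answer above):
import cv2
import numpy as np

def suppress_non_maxima(img, ksize=7):
    # Zero every pixel that is not the maximum of its ksize x ksize neighbourhood.
    kernel = np.ones((ksize, ksize), np.uint8)
    local_max = cv2.dilate(img, kernel)         # per-pixel max over the neighbourhood
    suppressed = img.copy()
    suppressed[img < local_max] = 0             # keep only the local maxima
    return suppressed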
