I am trying to write a raycasting engine in assembly, and I have a problem: drawing textures does not seem to work properly.
This is what it looks like:
In the for loops that search for a collision with the wall, when a collision is found I take the fractional part of the x or the y coordinate and use it to calculate where on the texture to draw.
I have tried debugging and found that the final texture x is sometimes the same for several columns in a row, but you can see in the pictures that it works almost fine when looking from the side, so I don't think that's the problem.
The desired result is simply that the textures are drawn correctly, without those distortions.
I think the problem is somewhere in this code:
mov ebx,WINDOW_HEIGHT / 2        ; ebx = screen centre
sub ebx,eax                      ; ebx = centre - half wall height
mov eax,height
mov screenypos,ebx               ; top y of this wall column
dec typeofwall                   ; wall types are 1-based; make them 0-based
movss xmm0,floatingpoint         ; fractional part of the hit coordinate
mulss xmm0,FP4(64.0f)            ; scale to a texel column (64-px textures)
mov eax,typeofwall
cvtsi2ss xmm1,eax
mulss xmm1,FP4(64.0f)            ; offset into the atlas by wall type
addss xmm0,xmm1
movss tempvarr,xmm0
invoke FLOOR,tempvarr
cvtss2si eax, xmm0
mov finaltexturex,eax            ; final x inside the texture sheet
;invoke BUILDRECT,screenpos,screenypos,linewidth,height,hdc,localbrush
invoke DrawImage,hdc,wolftextures,screenpos,screenypos,finaltexturex,0,linewidth,64,linewidth,height
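For reference, here is a minimal Python sketch of what this fragment computes (the 64.0 constants suggest 64-pixel-wide textures in a horizontal atlas; the helper below is my reconstruction, not the poster's code):

```python
import math

TEXTURE_SIZE = 64  # matches the FP4(64.0f) constants in the assembly

def final_texture_x(floating_point: float, type_of_wall: int) -> int:
    """Reconstruction of the assembly above: pick the texel column from the
    fractional hit coordinate, then offset into the atlas by wall type."""
    wall_index = type_of_wall - 1            # the code does `dec typeofwall`
    x = floating_point * TEXTURE_SIZE        # fractional part -> texel column
    x += wall_index * TEXTURE_SIZE           # skip this wall type's texture
    return int(math.floor(x))

print(final_texture_x(0.5, 2))  # 96: column 32 of the second 64-px texture
```

The arithmetic itself looks sound; distortions like these usually come from upstream, in which coordinate's fractional part gets passed in for a given hit side.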
First, try printing, for each column, the "hit" coordinates and which one you would use for texturing (keep in mind that you have to use either the map_x or the map_y axis for texturing, depending on which grid line the ray intersected first and from which direction the ray came).
Now I have another idea... are you even using a byte map[16][16]; or something similar for the wall definitions (Wolf3D-style 2.5D ray casting), or is this a semi-polygon map system that calculates intersections with segments (the DOOM-style 2.5D perspective BSP 2D-edge drawer; there was no ray casting at all in the original DOOM)?
If you are doing a Wolf3D-style raycaster, be aware that you have to clean up your intersection formulas a lot and decide wisely which part of the calculation you do when, as a bad order of operations can quickly accumulate a considerable amount of accuracy error, leading to quirks like "holes" in walls (when, for a single pixel column, you miss the intersection with the wall), etc.
With floating point numbers you are even more susceptible to unexpected accuracy problems, as the accuracy shifts quickly with the exponent (so around coordinates 0.0,0.0 you have considerably better accuracy than around 1e6,1e6 on the map).
When done properly, it should look like "easy" stuff. I once made a quick and dirty version in Pascal in one afternoon (as an example for a friend who was trying to learn Pascal). But it's just as easy to do it wrong (for example, Bethesda's first Elder Scrolls game, "ARENA", had a horrible intersection calculation, with the walls' y-position jagged a lot). And the improper calculation usually has not only worse accuracy but also almost always involves more operations, so it's slower.
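You can see the effect directly: the spacing between adjacent representable doubles (one "ulp") grows with the magnitude of the value (Python sketch; single-precision floats behave the same way, only much sooner):

```python
import math

# distance to the next representable double grows with magnitude:
print(math.ulp(1.0))  # about 2.2e-16
print(math.ulp(1e6))  # about 1.2e-10, five orders of magnitude coarser

# consequence: a tiny step that is visible near the origin vanishes far away
print(1.0 + 1e-12 != 1.0)  # True
print(1e6 + 1e-12 == 1e6)  # True: the step is below one ulp and is lost
```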
Use paper and pencil to draw it all down (map grid, projection plane, triangulate around with values you have, look how you can minimize setup phase per screen-column-x, as minimum amount of calculation = highest accuracy).
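For comparison, here is a sketch of the textbook grid DDA used in Wolf3D-style casters (my own Python illustration, not the poster's code): the per-column setup is a handful of operations, and each step only adds a constant, so error does not accumulate per intersection.

```python
def cast_ray(px, py, dx, dy, grid):
    """March one ray from (px, py) along (dx, dy) through a cell grid;
    returns (map_x, map_y, side, distance) of the first wall hit."""
    map_x, map_y = int(px), int(py)
    delta_x = abs(1.0 / dx) if dx != 0 else float('inf')
    delta_y = abs(1.0 / dy) if dy != 0 else float('inf')
    if dx < 0:
        step_x, side_x = -1, (px - map_x) * delta_x
    else:
        step_x, side_x = 1, (map_x + 1.0 - px) * delta_x
    if dy < 0:
        step_y, side_y = -1, (py - map_y) * delta_y
    else:
        step_y, side_y = 1, (map_y + 1.0 - py) * delta_y
    while True:
        if side_x < side_y:        # next grid line crossed is vertical
            side_x += delta_x
            map_x += step_x
            side = 0
        else:                      # next grid line crossed is horizontal
            side_y += delta_y
            map_y += step_y
            side = 1
        if grid[map_y][map_x]:     # non-zero cell = wall
            dist = (side_x - delta_x) if side == 0 else (side_y - delta_y)
            return map_x, map_y, side, dist

grid = [[1, 1, 1, 1, 1],
        [1, 0, 0, 1, 1],
        [1, 1, 1, 1, 1]]
print(cast_ray(1.5, 1.5, 1.0, 0.0, grid))  # (3, 1, 0, 1.5)
```

The returned side is exactly the "which grid line did we hit" flag mentioned above: it decides whether map_x or map_y drives the texture coordinate.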
(The answer is quite general because there is almost no code to check; the code posted looks OK to me.)
Assume I have a model that is simply a cube. (It is more complicated than a cube, but for the purposes of this discussion, we will simplify.)
So when I am in Sketchup, the cube is Xmm by Xmm by Xmm, where X is an integer. I then export a Collada file and subsequently load it into three.js.
Now if I look at the geometry bounding box, the values are floats, not integers.
So now assume I am putting cubes next to each other with a small space in between say 1 pixel. Because screens can't draw half pixels, sometimes I see one pixel and sometimes I see two, which causes a lack of uniformity.
I think I can resolve this satisfactorily if I can somehow get the imported model to have integer dimensions. I have full access to all parts of the model starting with Sketchup, so any point in the process is fair game.
Is it possible?
Thanks.
Clarification: My app will have two views. The view that this is concerned with is using an OrthographicCamera that is looking straight down on the pieces, so this is really a 2D view. For purposes of this question, after importing the model, it should look like a grid of squares with uniform spacing in between.
UPDATE: I would ask that you please not respond unless you can provide an actual answer. If I need help finding a way to accomplish something, I will post a new question. For this question, I am only interested in knowing if it is possible to align an imported Collada model to full pixels and if so how. At this point, this is mostly to serve my curiosity and increase my knowledge of what is and isn't possible. Thank you community for your kind help.
Now you have to learn this thing about 3D programming: numbers don't mean anything :)
In the real world 1mm, 2.13cm and 100Kg specify something that can be measured and reproduced. But for a drawing library, those numbers don't mean anything.
In a drawing library, 3D points are always represented with 3 float values. You submit your points to the library, it transforms them into 2D points (they must be viewed on a 2D surface), and finally these 2D points are passed to a rasterizer, which translates floating point values into integer values (the screen has a resolution of NxM pixels, both N and M being integers) and colors the actual pixels.
Your problem simply is not a problem. A cube of 1mm really means nothing, because if you are designing an astronomic application, that object will never be seen, but if it's a microscopic one, it will even be way larger than the screen. What matters are the coordinates of the point, and the scale of the overall application.
Now back to your cubes, don't try to insert 1px in between two adjacent ones. Your cubes are defined in terms of mm, so try to choose the distance in mm appropriate to your world, and let the rasterizer do its job and translate them to pixels.
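To make that concrete, here is a toy orthographic world-to-pixel mapping (hypothetical names; the point is that floats only become integers at the final rounding step, and the on-screen size depends entirely on the scale you choose, not on the model's units):

```python
def to_pixel(x, y, scale, width, height):
    """Map a world coordinate (in whatever unit) to an integer pixel."""
    sx = width / 2 + x * scale   # world units -> screen-space floats
    sy = height / 2 - y * scale  # y flipped: screen y grows downwards
    return int(round(sx)), int(round(sy))

# the same point, two different scales, very different pixels:
print(to_pixel(1.0, 0.0, 10.0, 800, 600))   # (410, 300)
print(to_pixel(1.0, 0.0, 200.0, 800, 600))  # (600, 300)
```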
Two co-workers I tracked down have informed me that this is indeed impossible by normal means.
Let's say I throw a cube and it falls on the ground with 45, 45, 0 rotations (on its corner). Now in a 'perfect' world, the cube wouldn't consist of atoms, it would be 'perfect', there would be no wind (or any lesser movement of air), etc. And in the end the cube would stay on its corner. But we don't live in such a boring 'perfect' world, and physics emulators should take this into account, and they do quite nicely. So the cube falls on its side.
Now my question is: how random is that? Does the cube always fall on its left side? Or maybe it depends on Math.random()? Or maybe it depends on the current time? Or maybe it depends on some custom random function that takes not the time but the parameters of the objects on stage as its seed?
The reason I am asking is that if the randomness isn't based on time, I could probably cache the results of collisions (when objects stop) for their particular initial positions to optimize my animation. If I cached the whole animation, I wouldn't care, but if I only cached the end result, I could be surprised that two exactly identical situations evaluate to different results, and then one of them wouldn't fit my cached version.
I could just check the source of the Math.random functions, but that would be a shallow approach, as the code is surely optimized, and such sophisticated randomization isn't needed there; personally I would use something like fallLeft = time % 2. Also, the code could change over time.
I couldn't find anything about AwayPhysics here, so it's probably new to everyone; that's why I added the part in parentheses. The world won't explode if I assume one thing and it happens to be the opposite in AwayPhysics, but what is the standard?
I, personally, don't use pre-made physics engines; when I want one, I write it myself, so I know how they work inside. The reason the cube tips over is that the physics engine is inexact. It can only approximate things like trig functions, square roots, integrals, et cetera, so it estimates them to a limited number of digits of accuracy (about 15 significant decimal digits for JavaScript's doubles). If you had, say, two perfect circles stacked on top of each other, the angle between them (pi/2) would slowly drift to some seemingly random value based on the way the program approximates pi. Eventually this tiny error would grow as the circles rolled off each other, and the top one would fall. So, in answer to your question: the cube should fall the same way each time if thrown in the same way, but the direction in which it always falls is effectively random.
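The repeatability part is easy to demonstrate: floating point arithmetic is inexact but fully deterministic, so a simulation with no random inputs gives bit-identical results on every run (a toy explicit-Euler spring below, not any particular engine):

```python
def simulate(x, v, steps, dt=0.01):
    """Tiny damped-spring integrator; pure float arithmetic, no randomness."""
    for _ in range(steps):
        a = -4.0 * x - 0.1 * v  # spring force + damping
        v += a * dt
        x += v * dt
    return x, v

run1 = simulate(1.0, 0.0, 10000)
run2 = simulate(1.0, 0.0, 10000)
print(run1 == run2)  # True: same initial state, bit-identical outcome
```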
I am thinking of implementing an image-processing-based solution to an industrial problem.
The image consists of a red rectangle. Inside it I will see a matrix of circles. The requirement is to count the number of circles under the following constraints. (Real application: count the number of bottles in a bottle casing. Any missing bottles?)
The time taken for the operation should be very low.
I need to detect the red rectangle as well. My objective is to count the items in the package, and there is no mechanism (sensors) to trigger the camera. So the camera will need to capture photos continuously, but the program should have a way to discard the unnecessary images.
Processing should be realtime.
There may be "noise" in the image capture: you may see ovals instead of circles.
My questions are as follows:
1. What is the best edge detection algorithm for the given scenario?
2. Are there any mechanisms I can use other than edge detection?
3. Does the language I use have a big impact on the performance of the system?
AHH - YOU HAVE NOW TOLD US THE BOTTLES ARE IN FIXED LOCATIONS!
IT IS AN INCREDIBLY EASIER PROBLEM.
All you have to do is look at each of the 12 spots and see if there is a black area there or not. Nothing could be easier.
You do not have to do any edge or shape detection AT ALL.
It's that easy.
You then pointed out that the box might be rotated and things could be jiggled. That the box might be rotated a little (or even a lot, 0 to 360 each time) is very easily dealt with. The fact that the bottles are in "slots" (even if jiggled) massively changes the nature of the problem. Your main problem (which is easy) is waiting until each new red square (crate) is centered under the camera. I just realised you meant "matrix" literally and specifically in your original question. That changes everything totally, compared to finding a disordered jumble of circles. Finding whether or not a blob is "on" at one of 12 points is a wildly different problem from "identifying circles in an image". Perhaps you could post an image to wrap up the question.
Finally I believe Kenny below has identified the best solution: blob analysis.
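A minimal sketch of that fixed-slot check, assuming a grayscale image already aligned to the crate (the slot boxes, darkness threshold, and fill ratio are made-up parameters you would calibrate):

```python
def count_bottles(image, slots, dark_threshold=60, fill_ratio=0.5):
    """image: 2D list of grayscale values (0-255); slots: (x0, y0, x1, y1)
    boxes, one per bottle position. A slot counts as occupied when enough
    of its pixels are dark."""
    count = 0
    for x0, y0, x1, y1 in slots:
        pixels = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
        dark = sum(1 for p in pixels if p < dark_threshold)
        if dark >= fill_ratio * len(pixels):
            count += 1
    return count
```

For 12 slots this is a handful of region sums per frame, easily real-time.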
"Count the number of bottles in a bottle casing"...
Do the individual bottles sit in "slots"? ie, there are 4x3 = 12 holes, one for each bottle.
In other words, you "only" have to determine if there is, or is not, a bottle in each of the 12 holes.
Is that correct?
If so, your problem is incredibly easier than the more general problem of a pile of bottles "anywhere".
Quite simply, where do we see the bottles from? The top, sides, bottom, or? Do we always see the tops/bottoms, or are they mixed (ie, packed top-to-tail). These issues make huge, huge differences.
SURF/SIFT = overkill; in this case you certainly don't need it.
If you want real time speed (about 20fps+ on a 800x600 image) I recommend using Cuda to implement edge detection using a standard filter scheme like sobel, then implement binarization + image closure to make sure the edges of circles are not segmented apart.
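For reference, the Sobel + binarization stages of that pipeline look like this in plain Python (illustrative only; a real-time version would run the same per-pixel arithmetic on the GPU, and the closing step is omitted here):

```python
def sobel_binary(img, thresh):
    """Sobel gradient magnitude, thresholded to a binary edge map."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2 * img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2 * img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2 * img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2 * img[y-1][x] - img[y-1][x+1])
            out[y][x] = 1 if (gx * gx + gy * gy) ** 0.5 > thresh else 0
    return out
```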
The hardest part will be fitting circles. This is assuming you already got to the step where you have taken edges and made sure they are connected using image closure (morphology.) At this point I would proceed as follows:
run blob analysis/connected components to segment out circles that do not touch. If circles can touch the next step will be trickier
for each connected component/blob, fit a circle or rectangle using RANSAC, which can run in real time (as opposed to the Hough Transform, which I believe is very hard to run in real time)
Step 2 will be much harder if you cannot segment the connected components that form circles separately, so some additional thought should be invested in how to guarantee that condition.
Good luck.
Edit
Having thought about it some more, I feel like RANSAC is ideal for the case where the circle connected components do touch. RANSAC should hypothetically fit the circle to only a part of the connected component (due to its ability to perform well in the case of mostly outlier points.) This means that you could add an extra check to see if the fitted circle encompasses the entire connected component and if it does not then rerun RANSAC on the portion of the connected component that was left out. Rinse and repeat as many times as necessary.
Also I realize that I say circle but you could just as easily fit an ellipse instead of circles using RANSAC.
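A hedged sketch of the RANSAC circle fit being described (pure Python, illustrative parameters): the model is the exact circumcircle of three randomly sampled edge points, scored by how many points lie within a tolerance band of it.

```python
import random

def circle_from_3(p1, p2, p3):
    """Circumcircle (cx, cy, r) of three points, or None if collinear."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return ux, uy, ((ax - ux) ** 2 + (ay - uy) ** 2) ** 0.5

def ransac_circle(points, iters=200, tol=2.0):
    """Best circle over random 3-point samples, robust to outliers."""
    best, best_inliers = None, []
    for _ in range(iters):
        fit = circle_from_3(*random.sample(points, 3))
        if fit is None:
            continue
        cx, cy, r = fit
        inliers = [p for p in points
                   if abs(((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5 - r) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = fit, inliers
    return best, best_inliers
```

The leftover-points recheck described above would wrap this: remove best_inliers, rerun on what remains, and repeat until too few points are left.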
Also, I'd like to clarify that when I say CUDA is a good choice, I mean CUDA is a good choice to implement the Sobel filter + binarization + image closing on. Connected components and RANSAC are probably best left to the CPU; you can try pushing them onto CUDA, though I don't know how much of an advantage a GPU will give you for those two over a CPU.
For the circles, try the Hough transform.
other mechanisms: dunno
Compiled languages will possibly be faster.
SIFT should have a very good response to circular objects; it is patented, though. GLOH is a similar algorithm, but I do not know if there are any implementations readily available.
Actually, doing some more research, SURF is an improved version of SIFT with quite a few implementations available, check out the links on the wikipedia page.
Sum of colors + convex hull to detect the boundary. You mostly need the 4 corners of a rectangle, not its sides, right?
No motion, no second camera, little choice: lots of math methods against very little input (color histograms, a color distribution matrix). Dunno.
Java == high memory consumption, Lisp == high brain consumption, C++ == memory/cpu/speed/brain use optimum.
If the contrast is good, blob analysis is the algorithm for the job.
Greetings,
I'm working on a game project that uses a 3D variant of hexagonal tile maps. Tiles are actually cubes, not hexes, but are laid out just like hexes (because a square can be turned into a cube to extrapolate from 2D to 3D, but there is no 3D version of a hex). Rather than a verbose description, here is an example of a 4x4x4 map:
(I have highlighted an arbitrary tile (green) and its adjacent tiles (yellow) to help describe how the whole thing is supposed to work; but the adjacency functions are not the issue, that's already solved.)
I have a struct type to represent tiles, and maps are represented as a 3D array of tiles (wrapped in a Map class to add some utility methods, but that's not very relevant).
Each tile is supposed to represent a perfectly cubic space, and they are all exactly the same size. Also, the offset between adjacent "rows" is exactly half the size of a tile.
That's enough context; my question is:
Given the coordinates of two points A and B, how can I generate a list of the tiles (or, rather, their coordinates) that a straight line between A and B would cross?
That would later be used for a variety of purposes, such as determining Line-of-sight, charge path legality, and so on.
BTW, this may be useful: my maps use (0,0,0) as the reference position. The 'jagging' of the map can be defined as offsetting each tile by ((y+z) mod 2) * tileSize/2.0 to the right of the position it would have in a "sane" cartesian system. For the non-jagged rows that yields 0; for rows where (y+z) mod 2 is 1, it yields half a tile.
I'm working in C# 4 targeting .NET Framework 4.0, but I don't really need specific code, just an algorithm for this weird geometric/mathematical problem. I have been trying for several days to solve it, to no avail; and trying to draw the whole thing on paper to "visualize" it didn't help either :(
Thanks in advance for any answer
Until one of the clever SOers turns up, here's my dumb solution. I'll explain it in 2D 'cos that makes it easier to explain, but it will generalise to 3D easily enough. I think any attempt to try to work this entirely in cell index space is doomed to failure (though I'll admit it's just what I think and I look forward to being proved wrong).
So you need to define a function to map from cartesian coordinates to cell indices. This is straightforward, if a little tricky. First, decide whether point(0,0) is the bottom left corner of cell(0,0) or the centre, or some other point. Since it makes the explanations easier, I'll go with bottom-left corner. Observe that any point(x,floor(y)==0) maps to cell(floor(x),0). Indeed, any point(x,even(floor(y))) maps to cell(floor(x),floor(y)).
Here I invent the boolean function even, which returns True if its argument is an even integer. I'll use odd next: any point(x,odd(floor(y))) maps to cell(floor(x-0.5),floor(y)).
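The mapping just described can be written out directly (2D version, bottom-left-corner convention as above; the 3D generalisation adds the z index and uses the parity of (y+z)):

```python
import math

def point_to_cell(x, y):
    """Cartesian point -> cell index; point (0,0) is the bottom-left corner
    of cell (0,0), and odd rows are shifted +0.5 to the right."""
    row = math.floor(y)
    col = math.floor(x) if row % 2 == 0 else math.floor(x - 0.5)
    return col, row

print(point_to_cell(0.25, 0.5))  # (0, 0)
print(point_to_cell(0.25, 1.5))  # (-1, 1): odd row, so the point falls in
                                 # the cell hanging half a unit to the left
```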
Now you have the basics of the recipe for determining lines-of-sight.
You will also need a function to map from cell(m,n) back to a point in cartesian space. That should be straightforward once you have decided where the origin lies.
Now, unless I've misplaced some brackets, I think you are on your way. You'll need to:
decide where in cell(0,0) you position point(0,0); and adjust the function accordingly;
decide where points along the cell boundaries fall; and
generalise this into 3 dimensions.
Depending on the size of the playing field you could store the cartesian coordinates of the cell boundaries in a lookup table (or other data structure), which would probably speed things up.
Perhaps you can avoid all the complex math if you look at your problem in another way:
I see that you only shift your blocks (alternating) along the first axis, by half the block size. If you split up your blocks along this axis, the above example becomes (with the shifts) a simple 9x4x4 cartesian coordinate system with regularly stacked blocks. Now the raytracing becomes much simpler and less error prone.
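Under that split-blocks assumption the traversal reduces to the standard voxel walk over a regular grid (an Amanatides-and-Woo-style sketch; the names are mine):

```python
import math

def traverse_grid(start, end, cell=1.0):
    """List the grid cells a segment from start to end passes through (3D)."""
    pos = [math.floor(c / cell) for c in start]
    goal = [math.floor(c / cell) for c in end]
    d = [e - s for s, e in zip(start, end)]
    step, t_max, t_delta = [], [], []
    for i in range(3):
        if d[i] > 0:
            step.append(1)
            t_max.append(((pos[i] + 1) * cell - start[i]) / d[i])
            t_delta.append(cell / d[i])
        elif d[i] < 0:
            step.append(-1)
            t_max.append((pos[i] * cell - start[i]) / d[i])
            t_delta.append(-cell / d[i])
        else:
            step.append(0)
            t_max.append(float('inf'))
            t_delta.append(float('inf'))
    cells = [tuple(pos)]
    while pos != goal:
        axis = t_max.index(min(t_max))  # advance across the nearest boundary
        pos[axis] += step[axis]
        t_max[axis] += t_delta[axis]
        cells.append(tuple(pos))
    return cells

print(traverse_grid((0.5, 0.5, 0.5), (2.5, 0.5, 0.5)))
# [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
```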
I'm using this marching cubes algorithm to draw 3D isosurfaces (ported into C#, outputting MeshGeometry3Ds, but otherwise the same). The resulting surfaces look great, but are taking a long time to calculate.
Are there any ways to speed up marching cubes? The most obvious one is to simply reduce the spatial sampling rate, but this reduces the quality of the resulting mesh. I'd like to avoid this.
I'm considering a two-pass system, where the first pass samples space much more coarsely, eliminating volumes where the field strength is well below my isolevel. Is this wise? What are the pitfalls?
Edit: the code has been profiled, and the bulk of CPU time is split between the marching cubes routine itself and the field strength calculation for each grid cell corner. The field calculations are beyond my control, so speeding up the cubes routine is my only option...
I'm still drawn to the idea of trying to eliminate dead space, since this would reduce the number of calls to both systems considerably.
I know this is a bit old, but I recently implemented Marching Cubes based on much the same source. There is a LOT of inefficiency here. At a minimum if you were doing something like
for (int x = 0; x < densityArrayWidth; x++)
    for (int z = 0; z < densityArrayLength; z++)
        for (int y = 0; y < densityArrayHeight; y++)
            Polygonize(Gridcell, isolevel, Triangles);
Look at how many times you'd be reallocating the edgeTable and Tritable! Those immediately need to move out to the overall class. I ditched the gridCell object as well, going directly from the points/values to the triangles.
In short it isn't just the algorithmic complexity, memory allocations (and in the base this does a huge amount of them) take time also.
Just in case anyone else ends up here, dead-space elimination through a coarser sampling rate makes virtually no difference at all. Any remotely safe (ie: allowing a border for sampling artifacts) coarser sampling ends up grabbing most of the grid anyway in any remotely non-trivial field.
Speeding up the underlying field evaluation (with heavy memoisation) seemed to mostly solve the performance problems.
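That memoisation can be as simple as functools.lru_cache, provided the field is keyed on integer grid indices rather than raw floats, so each corner shared by up to 8 neighbouring cubes is evaluated once (the field below is a made-up stand-in for the expensive evaluation):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def field(ix, iy, iz):
    """Stand-in for an expensive field evaluation at grid corner (ix, iy, iz)."""
    return ix * ix + iy * iy + iz * iz  # hypothetical scalar field

field(1, 2, 3)  # computed
field(1, 2, 3)  # served from the cache
print(field.cache_info().hits)  # 1
```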
Try marching tetrahedra instead -- the math is simpler, allowing you to consider fewer cases per cell.
Each cube has 12 edges. If you go through each cube and find all 12 intersection points, you are doing 4 times too many intersection calculations: you only have to compute the 3 edges at the bottom-left corner of each cube, plus an extra row at the top-right corner of the zone, and then use an indexing scheme to access all the values you have already found. I'm going to write a topic on this because it needs to be discussed and it's complicated.
Also, test for areas in space that need polygons by assessing the iso level with an octree, and skip areas far from the iso level.
I had a look at propagation, but it isn't very reliable or efficient.