Delphi - Algorithm to find the next point at which to draw a component

Good day All,
I am developing a program that will represent a warehouse graphically.
The representation is very basic and I am using:
1. TGroupBox (as the parent container)
2. TPanel (as the shelves inside the "warehouse", i.e. the parent container)
The challenge I need help with: if I have 2 or more GroupBoxes to draw dynamically, how can I determine the next point at which to start drawing?
So far my code works very well with only 2 GroupBoxes, but I need an intelligent algorithm or way to calculate the next point on my canvas at which to draw the warehouse.
I am sure I could do this by scanning pixel by pixel and checking whether another component occupies that point, but there must be a cleverer algorithm that can assist :)
Also remember the GroupBoxes can be rectangles or squares; in other words, height and width can differ.
How can I do this?
EDIT1:
Sorry, my explanation might be lacking. I am not painting them myself, just creating them in code and positioning them dynamically. Unfortunately I cannot post a picture because my reputation is too low, so I will try to explain better. Say I have 3 warehouses. Warehouse 1 contains 2 rows and 3 columns (shelves), and Warehouse 2 contains 20 rows and 5 columns. I have created Warehouse 1 (a GroupBox component) in code and positioned it at point (0, 0) on my parent control. Warehouse 2 can now be created either to the right of or below Warehouse 1, depending on what makes the most sense in terms of the open space available on the parent (screen real estate). And then for Warehouse 3, how can I determine in code where to place it on my parent container? Obviously I can't just always draw the next warehouse below the previous one. The previous warehouse might, for example, contain only 1 row and 1 shelf, which makes it very small, so it would make sense to draw the next warehouse to its right. But if the previous warehouse's width is large, it would make sense to draw the next warehouse below it.
http://www.programmer.co.za/downloads/SOW.png

If you can't find any good answers, I think you can do some research on "cut list" algorithms. Take a look at the DelphiForFun site.
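For the simple case of flowing boxes left-to-right and wrapping to a new row, a "shelf" packing pass (a relative of the cut-list idea) gives you the next point directly from the sizes you already know, with no pixel scanning. A minimal Python sketch to translate into Delphi; the function name, gap and sizes are illustrative assumptions, not the asker's code:

```python
# Shelf placement sketch: place each warehouse box left-to-right on the
# current row; when the next box would overflow the parent's width, start
# a new row below the tallest box placed so far.

def place_boxes(sizes, parent_width, gap=8):
    """sizes: list of (width, height); returns list of (x, y) top-left points."""
    positions = []
    x = y = 0
    row_height = 0
    for w, h in sizes:
        if x > 0 and x + w > parent_width:   # doesn't fit on this row
            x = 0
            y += row_height + gap            # drop below the tallest box in the row
            row_height = 0
        positions.append((x, y))
        x += w + gap
        row_height = max(row_height, h)
    return positions

print(place_boxes([(300, 120), (80, 80), (500, 200)], parent_width=600))
# -> [(0, 0), (308, 0), (0, 128)]
```

This wastes a little vertical space when row heights vary a lot; for tighter packing you would move up to a full 2D bin-packing heuristic, but for a handful of warehouses the shelf pass is usually enough.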

Related

Algorithm to interpolate any view from individual view mapped on a sphere

I'm trying to create a graphics engine to show point cloud data (in first person, for now). My idea is to precalculate individual views from different points in the space we are viewing and map them onto a sphere. Is it possible to interpolate that data to determine the view from any point in the space?
I apologise for my English and my poor explanation, but I can't figure out another way to explain it. If you don't understand my question, I'll be happy to reformulate it if needed.
EDIT:
I'll try to explain it with an example
Image 1:
Image 2:
In these images we can see two different views of the pumpkin (imagine that we have a sphere map of the 360° view in both cases). In the first case we have a far view of the pumpkin: we can see its surroundings, and imagine that we have a chest right behind the character (we'd have a detailed view of the chest if we looked behind).
So, first view: the surroundings and a low-detail image of the pumpkin, plus good detail of the chest but without its surroundings.
In the second view we have the exact opposite: a detailed view of the pumpkin and an undetailed general view of the chest (still behind us).
The idea would be to combine the data from both views to calculate every view between them. So going towards the pumpkin would mean stretching the points of the first image and filling the gaps with the second one (forget all the other elements, just the pumpkin). At the same time, we would compress the image of the chest and fill in its surroundings with the data from the general view of the second one.
What I would like is an algorithm that dictates that stretching, compressing and combination of pixels (not only forward and backward, but also diagonally, using more than two sphere maps). I know it's fairly complicated; I hope I expressed myself well enough this time.
EDIT:
(I'm using the word "view" a lot, and I think that's part of the problem, so here is the definition of what I mean by "view": a matrix of colored points, where each point corresponds to a pixel on the screen. The screen only displays part of the matrix at a time (the matrix would be the 360° sphere and the display a fraction of that sphere). A view is the matrix of all the possible points you can see by rotating the camera without moving its position.)
Okay, it seems that you still don't understand the concept. The idea is to display environments in as much detail as possible by "precooking" the maximum amount of data before displaying it in real time. I'll deal with the preprocessing and the compression of the data for now; I'm not asking about that. The most "precooked" model would be to store the 360° view at each point in the displayed space (if the character moves at, for example, 50 points per frame, then store a view every 50 points; the point is to precalculate the lighting and shading and to filter out the points that won't be seen, so that they are not processed for nothing). Basically, calculate every possible screenshot (in a totally static environment). But of course that's ridiculous: even if you could compress that data heavily, it would still be too much.
The alternative is to store only some strategic views, less frequently. Most of the points are repeated in each frame if we store all the possible ones, and the change in position of the points on screen is also mathematically regular. What I'm asking for is exactly that: an algorithm to determine the position of each point in the view based on a few strategic viewpoints. How do I use and combine data from strategic views at different positions to calculate the view at any place?

Tracking user defined points with OpenCV

I'm working on a project where I need to track two points in an image. So far, the best way I have of identifying these points is to get the user to click on them when the program is first run. I'm using the pyramidal Lucas-Kanade method built into OpenCV (documented here), but as is to be expected, this doesn't work too well. Is there a better alternative algorithm for tracking points in OpenCV, or alternatively some other way of verifying the points I already have?
I'm currently considering using GoodFeaturesToTrack, and getting the distance from each point to the one that I want to track, and maybe some sort of vector pointing out the relationship between the two points, and using this information to determine my new point.
I'm looking for suggestions of ways to go about this, not necessarily code samples.
Thanks
EDIT: I'm tracking small movements, if that helps
If you are looking for a solution that is implemented in OpenCV, the pyramidal Lucas-Kanade (PLK) method is quite good; otherwise I would prefer a particle-filter-based tracker.
To improve your tracking performance with the PLK, be sure that you have set the parameters correctly. E.g. for large motion you need a pyramid level of about 3 or 4. The window should not be too small (I prefer 17x17 to 27x27). Also keep in mind that the method needs textured areas to be able to track the points, that is, corner-like image content (aperture problem).
I would propose seeding a set of points (ps) in a grid around the points (P) you want to track, and then using a forward-backward threshold to reject falsely tracked points. The motion of your points (P) is then computed as the mean motion of the surviving point sets (ps).
The forward-backward confidence is computed by estimating the motion from frame 1 to frame 2 (ptList1 -> ptList2), and then from frame 2 back to frame 1 with the points of ptList2 (ptList2 -> ptListRef). A motion vector is rejected if || ptRef - pt1 || > fb_threshold.

Algorithm to place images next to stories in a constrained 4-column newspaper grid

This question is about any code or pointer that facilitates layout of images "near" to stories within a constrained and well-defined grid.
These are the basics and inputs:
The entire object is called a newspaper. It has a set of stories (of varying text length)
Each story can either have an image associated to it, or not
The newspaper is laid out into 4 columns automatically. Text flows from top left to bottom right, down each column
Images can be placed into fixed positions - top left, top centre, top right, left centre, centre, right centre, bottom left, bottom centre, bottom right
When an image is placed, it can span between 1 to 3 columns. The height is automatically adjusted to fit the proportions, based on the span which is set.
All the actual layout work (and images flowing around the text) is done - what is required of the algorithm is the decision making only
The overall problem is to place stories in the fixed layout in an interesting way such that pictures are near to the stories associated with them, and also exhibit an interesting variation of position and span, so as to make the printed reading experience interesting and aesthetically pleasing.
This is the work required of the algorithm:
When a story with a picture is placed into the newspaper, at least one edge of the picture must touch the story to which it relates.
We need to decide whether or not to re-order the stories so that the image density is not too biased in one area e.g. the first 4 pages have lots of pictures and the rest of the newspaper is just text.
If we choose to re-order and shuffle stories, then what is the optimal approach to resolving this placement problem?
In general, I'm not sure if this falls under the bin-packing problem - in the sense that stories can be re-ordered to minimise white-space; because we need images to be placed "near" to stories.
Any pointers to how to approach this problem, or code that facilitates a similar class of problem is appreciated.
Let's say that an ordering of stories has those stories' text appearing one after another in column order. If a story has a picture, then the picture could be above, below, left or right of the story's text.
Generate all possible orders of stories and pictures
Discard the order if any picture position won't fit (it can't be to the left of the leftmost column, or if placed where specified, the picture doesn't end up in one of the allowed locations).
You are left with a set of possible story/picture orders that fit the placement rules. You might then display them in miniature and have sub-editors choose the best.
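The enumerate-and-filter step above can be sketched as follows. Here `fits` is a hypothetical stand-in for the real layout engine's acceptance check, and this brute force is only viable for small story counts, since the number of orderings grows factorially:

```python
# Generate every (story order, picture-side assignment) pair and keep only
# those the layout engine accepts.
from itertools import permutations, product

SIDES = ("above", "below", "left", "right")

def candidate_layouts(stories, fits):
    """stories: list of (story_id, has_picture) tuples.
    fits(order, placement) -> bool is the layout engine's check.
    Yields (order, placement) pairs that pass, where placement maps
    story_id -> picture side."""
    for order in permutations(stories):
        pictured = [s for s in order if s[1]]
        for sides in product(SIDES, repeat=len(pictured)):
            placement = dict(zip((s[0] for s in pictured), sides))
            if fits(order, placement):
                yield order, placement
```

For larger newspapers you would replace the exhaustive loop with a heuristic search (e.g. local swaps scored on image density and whitespace), but the filter structure stays the same.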

Making a good XY (scatter) chart in VB6

I need to write an application in VB6 which makes a scatter plot out of a series of data points.
The current workflow:
User inputs info.
A bunch of calculations go down.
The output data is displayed in a series of 10 list boxes.
Each time the "calculate" button is clicked, 2 to 9 entries are entered into the list boxes.
One list box contains x coordinates.
One list box contains the y coordinates.
I need to:
Scan through those list boxes, and select my x's and y's.
Another list box field will change from time to time, varying between 0 and 100, and that field is what needs to differentiate which series on the eventual graph the x's and y's go into. So I will have Series 1 with six (x,y) data points, Series 26 with six data points, Series 99 with six data points, etc. Or eight data points. Or two data points. The user controls how many x's there are.
Ideally, I'll have a graph with multiple series displaying all this info.
I am not allowed to use a 3rd party solution (e.g. Excel). This all has to be contained in a VB6 application.
I'm currently trying to do this with MS Chart, as there seems to be the most documentation for that. However, this seems to focus on pie charts and other unrelated visualizations.
I'm totally open to using MS Graph but I don't know the tool and can't find good documentation.
A 2D array is, I think, a no-go, since it would need to change size dynamically all the time, and that can't be done (or so I've been told). I would ideally cull through the runs, sort the data by that third series parameter, and then plug in the x's and y's, but I'm finding the commands and structure of MS Chart so dense that I'm just running around in very small circles.
Edit: It would probably help if you can visualize what my data looks like. (S for series, made up numbers.)
S X Y
1 0 1000000
1 2 500000
1 4 250000
1 6 100000
2 0 1000000
2 2 6500
2 4 5444
2 6 1111
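Whatever charting route you take, the S column is what buckets each (x, y) pair into a series. A small Python sketch of that grouping step, using illustrative data from the table above (the real app would read the list boxes instead):

```python
# Group (S, X, Y) rows into one point list per series.
from collections import defaultdict

rows = [(1, 0, 1000000), (1, 2, 500000), (1, 4, 250000), (1, 6, 100000),
        (2, 0, 1000000), (2, 2, 6500), (2, 4, 5444), (2, 6, 1111)]

series = defaultdict(list)
for s, x, y in rows:
    series[s].append((x, y))

print(sorted(series))   # -> [1, 2]
```

Each `series[s]` list can then be plotted as its own line or point set, whether that is done with a chart control or by drawing dots by hand.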
I don't know MSGraph, but I'm sure there is some sort of canvas element in VB6 which you can use to easily draw dots yourself. Scatter plots are an easy graph to make on your own, so long as you don't need to calculate a line of best fit.
I would suggest looking into the canvas element and doing it by hand if you can't find a tool that does it for you.
Conclusion: MSChart and MSGraph can both go suck a lemon. I toiled and toiled and got a whole pile of nothing out of either one. I know they can do scatter plots, but I sure as heck can't make them do 'em well.
@BlackBear! After finding out that my predecessor had the same problems and just used Pset and Line to make some really impressive graphs, I did the same thing - even if it's not reproducible and generic in the future as was desired. The solution that works, albeit less functionally >> the solution with great functionality that exists only in myth.
If anyone is reading this down the line and has an actual answer about scatter plots and MSChart/Graph, I'd still love to know.

Automatic tracking algorithm

I'm trying to write a simple tracking routine to track some points on a movie.
Essentially I have a series of 100-frames-long movies, showing some bright spots on dark background.
I have ~100-150 spots per frame, and they move over the course of the movie. I would like to track them, so I'm looking for an efficient (but hopefully not overkill to implement) routine to do that.
A few more details:
the spots are a few (e.g. 5x5) pixels in size
the movements are not big; a spot generally does not move more than 5-10 pixels from its original position, and the movements are generally smooth
the "shape" of these spots is generally fixed; they don't grow or shrink, BUT they become less bright as the movie progresses
the spots don't move in a particular direction; they can move right, then left, then right again
the user will select a region around each spot and then this region will be tracked, so I do not need to automatically find the points.
As the videos are b/w, I thought I should rely on brightness. For instance, I thought I could move the region around and calculate the correlation of the region's area in the previous frame with that at the various positions in the next frame. I understand that this is a quite naïve solution, but do you think it might work? Does anyone know specific algorithms that do this? It doesn't need to be super fast; as long as it is accurate, I'm happy.
Thank you
nico
Sounds like a job for Blob detection to me.
I would suggest Pearson's product-moment correlation. Given a model (which could be any template image), you can measure the correlation of the template with any section of the frame.
The result is a probability-like factor that expresses how well the samples correlate with the template. It is especially applicable to 2D cases.
It has the advantage of being independent of the samples' absolute values, since the result depends on the covariance about the mean of the samples.
Once you detect a high correlation, you can search the successive frames in the neighborhood of the original position and select the best correlation factor.
The size and the rotation of the template do matter, but as I understand it that is not an issue here. You can customize the detection for any shape, since the template image can represent any configuration.
Here is a single-pass algorithm implementation that I've used and that works correctly.
This has got to be a well-researched topic, and I suspect there won't be any 100% accurate solution.
Some links which might be of use:
Learning patterns of activity using real-time tracking. A paper by two guys from MIT.
Kalman Filter. Especially the Computer Vision part.
Motion Tracker. A student project, which also has code and sample videos I believe.
Of course, this might be overkill for you, but hope it helps giving you other leads.
Simple is good. I'd start doing something like:
1) over a small rectangle that surrounds a spot:
2) apply a weighted average of all the pixel coordinates in the area
3) call the averaged X and Y values the object's position
4) while scanning these pixels, do something to approximate the bounding box size
5) repeat on the next frame with a slightly enlarged bounding box so you don't clip a spot that moves
The weight for the average should go to zero for pixels below some threshold. Step 4 can be as simple as tracking the min/max position of anything brighter than the same threshold.
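Steps 1-4 above can be sketched with NumPy as follows; the function name, rectangle convention and threshold are illustrative assumptions:

```python
# Brightness-weighted centroid tracking: inside a search rectangle, average
# pixel coordinates weighted by brightness (zero weight below a threshold),
# and record the tight bounding box of the above-threshold pixels.
import numpy as np

def track_spot(frame, rect, threshold=50):
    """rect = (x, y, w, h). Returns ((cx, cy) centroid in full-frame
    coordinates, bounding box of bright pixels), or (None, None) if lost."""
    x, y, w, h = rect
    roi = frame[y:y + h, x:x + w].astype(float)
    weights = np.where(roi > threshold, roi, 0.0)   # weight -> 0 below threshold
    total = weights.sum()
    if total == 0:
        return None, None                           # spot lost (e.g. faded out)
    ys, xs = np.mgrid[0:h, 0:w]
    cx = x + (weights * xs).sum() / total
    cy = y + (weights * ys).sum() / total
    bright = np.argwhere(roi > threshold)
    (y1, x1), (y2, x2) = bright.min(0), bright.max(0)
    bbox = (x + x1, y + y1, x2 - x1 + 1, y2 - y1 + 1)
    return (cx, cy), bbox
```

For the next frame you would enlarge `bbox` by a few pixels (step 5) and call `track_spot` again; since the spots move at most 5-10 pixels and dim gradually, the threshold may need to decay over the movie.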
This will of course have issues with spots that overlap or cross paths. But for some reason I keep thinking you're tracking stars with some unknown camera motion, in which case this should be fine.
I'm afraid that blob tracking is not simple, not if you want to do it well.
Start with blob detection as genpfault says.
Now you have spots on every frame and you need to link them up. If the blobs are moving independently, you can use some sort of correspondence algorithm to link them up. See for instance http://server.cs.ucf.edu/~vision/papers/01359751.pdf.
Now you may have collisions. You can use a mixture of Gaussians to try to separate them; give up and let the tracks cross; or use any other before-and-after information to resolve the collisions (e.g. if A and B collide and A is brighter before and will be brighter after, you can keep tracking A; if A and B move along predictable trajectories, you can use that too).
Or you can collaborate with a lab that does this sort of stuff all the time.
