Finding last wave in mql4/5 - algorithm

I was wondering if there's an efficient and easy way to determine waves in MQL4, just like the ZigZag indicator does.
I was asked to help automate an indicator; for that I need to determine 'waves', essentially the maxima and minima of the chart over some period of time (which is vague and all relative).
I don't have a clear image of how I want the indicator to work, but it would be something like this:
Find the last wave, i.e. where the direction of price last changed (neglecting the noise), and then, for example, represent it with a trend line.
Is it possible to use the ZigZag structure to find that point where the direction changed? (Possibly not only that one; I might need to find more than just the last point, but the preceding ones too, so I will want to adapt the algorithm.)

I know it's been a while since you asked this question and you probably already have an answer, but if not...
I dislike ZigZag and have not found a way to do what I want to do with it, so I will answer the last part of your question with a no, and believe me, I tried.
The way I prefer is to find bars that conform to the classic definition of fractals/swing points (i.e. a high with two lower highs on either side, or a low with two higher lows on either side), then try to make up for the shortcomings. For example, often there will be two high fractals/swings/waves in a row without an intermediate low fractal/swing/wave; in that case I add the best intermediate low point as a wave, or remove one of the highs (e.g. if the first one wasn't as subjectively significant). Some of the swing points that are identified are 'noisy', to use your term, and not ones that a human trader would have picked, so these need to be dealt with, and so on. If you go down this route it is a long one; computers make many mistakes identifying appropriate swing points, so it is unfortunately not what I would call easy. But it is accurate, and how many easy indicators are there that actually make money over the long run?
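
For illustration, here is a minimal sketch of that five-bar fractal test, written in Python rather than MQL4 so the logic is easy to follow (the port is straightforward); the repair pass for consecutive same-type swings is left as a comment, since that is the subjective part:

def find_fractals(highs, lows):
    """Classic 5-bar fractal test: a swing high is a bar whose high
    exceeds the highs of the two bars on either side; a swing low is
    the mirror image. highs/lows are arrays of bar values, oldest first."""
    swings = []  # list of (bar_index, 'high' or 'low')
    for i in range(2, len(highs) - 2):
        if all(highs[i] > highs[j] for j in (i - 2, i - 1, i + 1, i + 2)):
            swings.append((i, 'high'))
        if all(lows[i] < lows[j] for j in (i - 2, i - 1, i + 1, i + 2)):
            swings.append((i, 'low'))
    # Repair pass (not shown): where two 'high' swings occur in a row,
    # insert the lowest intermediate low as a swing, or drop the less
    # significant high; mirror logic for two 'low' swings in a row.
    return swings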

Bejeweled board generation

I've recently stumbled upon this question:
How would you generate a board for a Bejeweled game to ensure that at the start there are no jewels that would collapse right away, and that there's at least one possible move?
I've been thinking about doing it randomly, letting the jewels fall and collapse if they wish before we actually display the board, and arguing that the probability of having no moves to make at all is so low that we shouldn't worry about it.
Is there a better approach?
One greedy approach: while generating the board, every time you try to add a random jewel, check that the two previous jewels horizontally and the two previous jewels vertically are not the same, to prevent the first situation (keeping the border conditions in mind).
To ensure you have some number of possible matches, after you generate the board you can pick a random point and update the jewels to its sides or above/below to create a possible match, while still making sure the first situation doesn't occur.
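
A minimal sketch of the greedy fill in Python (the 7-colour count is just an illustrative assumption; any count of 3 or more keeps the choice set non-empty):

import random

JEWEL_TYPES = 7  # assumption: 7 colours; any count >= 3 works

def generate_board(rows, cols):
    """Greedy fill: for each cell, exclude any jewel that would form
    three-in-a-row with the two cells to its left or the two cells
    above it (the border conditions are the c >= 2 / r >= 2 tests)."""
    board = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            choices = list(range(JEWEL_TYPES))
            if c >= 2 and board[r][c - 1] == board[r][c - 2]:
                choices.remove(board[r][c - 1])
            if r >= 2 and board[r - 1][c] == board[r - 2][c] \
                    and board[r - 1][c] in choices:
                choices.remove(board[r - 1][c])
            board[r][c] = random.choice(choices)
    return board

Guaranteeing at least one move would then be the second step above: pick a random spot, rewrite a couple of neighbouring jewels into a near-match, and re-run the three-in-a-row check afterwards.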

Matlab - distinguish overlapping low contrast objects in an RGB or Grayscale Image

I have a big problem detecting objects within an image. I know this topic has already been discussed at length in many forums, but I spent the last 4 days searching for an answer and was not able to find one.
In fact: I have a picture of a branch (http://cl.ly/image/343Y193b2m1c). My goal is to count every single needle in this picture. So I have to face several problems:
Separate the branch with its needles from the background (which in this case is no problem).
Select the borders of the needles. This is a huge problem; I tried different ways, including all the edge() functions, but the problem is always the same: the borders around the needles are not closed, which leads to the last problem:
Needles are overlapping! This results in "squares between the needles" which, if I use imfill() or similar functions, are filled in instead of the needles. And the places where the needles are concentrated (many needles in one place) are nearly impossible to distinguish.
I tried watershed, I tried enhancing the contrast, k-means clustering; I tried imerode, imdilate and related functions with subsequent edge detection. I also tried filtering and smoothing the picture a bit in order to "unsharpen" the needles, so that not every small change in color is recognized as a border (which is another problem).
I am relatively new to MATLAB, so I don't know what to look for. I tried to follow the MATLAB tutorial used for nuclei detection, but with that I can only get all the green objects (all needles at once).
I hope this question did not come up before; if it did, I apologize deeply for the double post. If anybody has an idea what to do or what methods to use, it would be awesome and would save this really bad beginning of the week.
Thank you very much in advance,
Phillip
Distinguishing overlapping objects is very, very hard, particularly if you do not know how many objects you have to distinguish. Your brain is much better at distinguishing overlapping objects than any segmentation algorithm I'm aware of, since it is able to integrate a lot of information that is difficult to encode. Therefore: If you're not able to distinguish some of the features yourself, forget about doing it via code.
Having said that, there may be a way for you to be able to get an approximate count of the needles: If you can segment the image pixels into two classes: "needle" versus "not needle", and you know how much area in your picture is covered by a needle (it may help to include a ruler when you take the picture), you can then divide number of "needle"-pixels by the number of pixels covered by a single needle to estimate the total number of needles in the image. This will somewhat underestimate the needle count due to overlaps, and it will underestimate more the denser the needles are (due to more overlaps), but it should allow you to compare automatically between branches with lots of needles and branches with few needles, as well as to identify changes in time, should that be one of your goals.
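
In code, that estimate is one division. A tiny Python/NumPy sketch, where needle_mask (a binary needle-vs-background image) and pixels_per_needle (calibrated from the ruler) are assumed inputs:

import numpy as np

def estimate_needle_count(needle_mask, pixels_per_needle):
    """Area-based estimate: total 'needle' pixels divided by the pixel
    area of one needle. Underestimates more as overlap increases."""
    needle_pixels = np.count_nonzero(needle_mask)
    return needle_pixels / pixels_per_needle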
I agree with @Jonas: you've got yourself one HUGE problem.
Let me make a few suggestions.
First, along @Jonas' direction: instead of getting an accurate count, another way of getting a rough estimate is to count the tips of the needles. Obviously, not all the tips are clearly visible. But if you can get a clean mask of the branch, it might be relatively easy to identify the tips of the needles using some of the morphological operations you mentioned yourself.
Second, is there any way you can get more information? For example, if you could have depth information it might help a little in distinguishing the needles from one another (it will not completely solve the task but it may help). You may get depth information from stereo - that is, taking two pictures of the branch while moving the camera a bit. If you have a Kinect device at your disposal (or some other range-camera) you can get a depth map directly...

Techniques to evaluate the "twistiness" of a road in Google Maps?

As per the title. I want to, given a Google Maps URL, generate a twistiness rating based on how winding the roads are. Are there any techniques available I can look into?
What do I mean by twistiness? Well, I'm not sure exactly. I suppose it's characterized by a high turn-to-distance ratio, as well as a high angle-change-per-turn number. I'd also say that the elevation change of a road comes into it as well.
I think that once you know exactly what you want to measure, the implementation is quite straightforward.
I can think of several measurements:
the ratio of the road length to the distance between start and end (this would make a long single curve "twisty", so it is most likely not the complete answer)
the number of inflection points per unit length (this would make an almost straight road with a lot of little swaying "twisty", so it is most likely not the complete answer)
These two could be combined by multiplication, so that you would have:
road-length * inflection-points
--------------------------------------
start-end-distance * road-length
You can see that this can be shortened to "inflection-points per start-end-distance", which does seem like a good indicator for "twistiness" to me.
As for taking elevation into account, I think that making the whole calculation in three dimensions is enough for a first attempt.
You might want to handle left-right inflections separately from up-down inflections, though, in order to make it possible to scale the elevation inflections by some factor.
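
As a rough sketch of the "inflection points per start-end distance" measure in Python (2D only; the elevation variant would use the analogous computation on (x, y, z) points; the input is assumed to be a densely and evenly sampled polyline of road vertices):

import math

def twistiness(points):
    """Inflection points per unit start-to-end distance, as combined
    above. points is a list of (x, y) road vertices."""
    inflections = 0
    prev_cross = 0.0
    # A sign change in the cross product of successive segments marks
    # a left/right inflection.
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        if prev_cross * cross < 0:
            inflections += 1
        if cross != 0:
            prev_cross = cross
    (sx, sy), (ex, ey) = points[0], points[-1]
    straight = math.hypot(ex - sx, ey - sy)
    return inflections / straight if straight > 0 else float('inf')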
Try http://www.hardingconsultants.co.nz/transportationconference2007/images/Presentations/Technical%20Conference/L1%20Megan%20Fowler%20Canterbury%20University.pdf as a starting point.
I'd assume that you'd have to somehow capture the road centreline from Google Maps as a vectorised dataset & analyse using GIS software to do what you describe. Maybe do a screen grab then a raster-to-vector conversion to start with.
Cumulative turn angle per km is a commonly-used measure in road assessment. Vertex density is also useful. Note that these measures depend upon an assumption that vertices have been placed at some form of equal density along the line length whilst they were captured, rather than being manually placed. Running a GIS tool such as a "bendsimplify" algorithm on the line should solve this. I have written scripts in Python for ArcGIS 10 to define these measures if anyone wants them.
Sinuosity is sometimes used for measuring bends in rivers; see the help pages for Hawth's Tools for ArcGIS for a good description. It could be misleading for roads that have major changes in course along their length, though.

Robot exploration algorithm

I'm trying to devise an algorithm for a robot trying to find a flag (positioned at an unknown location) in a world containing obstacles. The robot's mission is to capture the flag and bring it to its home base (which is its starting position). At each step the robot sees only a limited neighbourhood (it does not know what the world looks like in advance), but it has unlimited memory to store already-visited cells.
I'm looking for any suggestions about how to do this in an efficient manner, especially the first part: getting to the flag.
A simple Breadth First Search/Depth First Search will work, albeit slowly. Be sure to prevent the bot from checking paths that visit the same square multiple times, as this will cause these algorithms to run much longer in standard cases, and indefinitely in the case where the flag cannot be reached.
A* is the more elegant approach, especially if you know the location of the flag relative to yourself. Wikipedia, as per usual, does a decent job of explaining it. The classic heuristic to use is the Manhattan distance (number of moves assuming no obstacles) to the destination.
These algorithms are useful for the return trip - not so much the "finding the flag" part.
Edit:
These approaches involve creating objects that represent squares on your map, and creating "paths" or series of squares to hit (or steps to take). Once you build a framework for representing your squares, the problem of what kind of search to use becomes a much less daunting task.
This class will need to be able to get a list of adjacent squares and know if it is traversable.
Considering that you don't have all information, try just treating unexplored tiles as traversable, and recomputing if you find they aren't.
Edit:
As for searching an unknown area for an unknown object...
You can use something like Pledge's algorithm until you've found the boundaries of your space, recording all information as you go. Then go have a look at all unseen squares using your favorite drift/pathfinding algorithm. If, at any point along the way, you see the flag, stop what you're doing and use your favorite pathfinding algorithm to go home.
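
To make the A* suggestion concrete, here is a minimal Python sketch using the Manhattan-distance heuristic and the "treat unexplored tiles as traversable" idea from above; is_blocked is an assumed callback into the robot's map, and you would replan whenever an optimistically-traversable cell turns out to be an obstacle:

import heapq

def a_star(start, goal, is_blocked):
    """Grid A* with the Manhattan-distance heuristic, treating
    unexplored cells optimistically: is_blocked(cell) should return
    False for cells the robot has not seen yet."""
    def h(c):
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path  # list of cells from start to goal
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dx, cell[1] + dy)
            if is_blocked(nxt):
                continue
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # unreachable given current knowledge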
Part of it will be pathfinding, for example with the A* algorithm.
Part of it will be exploring. Any cell with an unknown neighbour is worth exploring. The best cells to explore are those closest to the robot and with the largest unexplored neighbourhood.
If the robot sees through walls some exploration candidates might be inaccessible and exploration might be required even if the flag is already visible.
It may be worthwhile to reevaluate the current target every time a new cell is revealed. As long as this is only done when new cells are revealed, progress will always be made.
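
A small sketch of that frontier-selection rule in Python (known/unknown are assumed sets of (x, y) cells, neighbours an adjacency function, and the distance weight is an assumption to be tuned):

def pick_exploration_target(known, unknown, robot, neighbours):
    """Choose a frontier cell: a known cell with at least one unknown
    neighbour, preferring many unknown neighbours and a short
    Manhattan distance to the robot."""
    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    best, best_score = None, None
    for cell in known:
        unexplored = sum(1 for n in neighbours(cell) if n in unknown)
        if unexplored == 0:
            continue  # not a frontier cell
        # The 0.1 distance weight is an assumption; tune to taste.
        score = unexplored - 0.1 * manhattan(cell, robot)
        if best_score is None or score > best_score:
            best, best_score = cell, score
    return best  # None means nothing left to explore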
With a simple DFS search you will at least find the flag. :)
Well, there are two parts to this.
1) Searching for the Flag
2) Returning Home
For the searching part, I would circle the home point, moving outward every time I complete a loop. This way you can examine every square and identify whether it is a clear spot, an obstacle, a map boundary or the flag, and thereby build a map of your environment.
Once the flag is found, you could either go back the same way or find a more direct route. For a more direct route, you would use the map you have built to find one.
What you want is to find all minimal spanning trees in the viewport of the robot and then let the robot choose which MST it wants to travel.
If you meet an obstacle, you can go around it to determine its precise dimensions, and after measuring it return to the previous course.
With no obstacles in the range of sight, you can just head in the direction of the nearest unchecked area.
It may not seem like the fastest way, but I think it is a good point to start from.
I think the approach would be to construct the graph as the robot travels. There will be a function that returns to the robot the state of a particular grid cell; this is needed since the robot does not know the state of the grid in advance.
You can apply heuristics in the search so the probability of reaching the flag is increased.
As many have mentioned, A* is good for global planning if you know where you are and where your goal is. But if you don't have this global knowledge, there is a class of algorithms called "bug" algorithms that you should look into.
As for exploration, if you want to find the flag fastest, then depending on how much of the local neighborhood your bot can see, you should try not to let these neighborhoods overlap. For example, if your bot can see one cell around it in every direction, you should explore every third column (columns 1, 4, 7, etc.). But if the bot can only see the cell it is currently occupying, then the most optimal thing you can do is simply not revisit cells you have already visited.

Smoothing values over time: moving average or something better?

I'm coding something at the moment where I'm taking a bunch of values over time from a hardware compass. This compass is very accurate and updates very often, with the result that if it jiggles slightly, I end up with the odd value that's wildly inconsistent with its neighbours. I want to smooth those values out.
Having done some reading around, it would appear that what I want is a high-pass filter, a low-pass filter or a moving average. Moving average I can get down with, just keep a history of the last 5 values or whatever, and use the average of those values downstream in my code where I was once just using the most recent value.
That should, I think, smooth out those jiggles nicely, but it strikes me that it's probably quite inefficient, and this is probably one of those Known Problems to Proper Programmers to which there's a really neat Clever Math solution.
I am, however, one of those awful self-taught programmers without a shred of formal education in anything even vaguely related to CompSci or Math. Reading around a bit suggests that this may be a high or low pass filter, but I can't find anything that explains in terms comprehensible to a hack like me what the effect of these algorithms would be on an array of values, let alone how the math works. The answer given here, for instance, technically does answer my question, but only in terms comprehensible to those who would probably already know how to solve the problem.
It would be a very lovely and clever person indeed who could explain the sort of problem this is, and how the solutions work, in terms understandable to an Arts graduate.
If you are trying to remove the occasional odd value, a low-pass filter is the best of the three options that you have identified. Low-pass filters allow low-speed changes such as the ones caused by rotating a compass by hand, while rejecting high-speed changes such as the ones caused by bumps on the road, for example.
A moving average will probably not be sufficient, since the effects of a single "blip" in your data will affect several subsequent values, depending on the size of your moving average window.
If the odd values are easily detected, you may even be better off with a glitch-removal algorithm that completely ignores them:
if (abs(thisValue - averageOfLast10Values) > someThreshold)
{
    // Glitch detected: fall back to the running average.
    thisValue = averageOfLast10Values;
}
Here is a quick graph to illustrate:
The first graph is the input signal, with one unpleasant glitch. The second graph shows the effect of a 10-sample moving average. The final graph is a combination of the 10-sample average and the simple glitch detection algorithm shown above. When the glitch is detected, the 10-sample average is used instead of the actual value.
If your moving average has to be long in order to achieve the required smoothing, and you don't really need any particular shape of kernel, then you're better off if you use an exponentially decaying moving average:
a(i+1) = tiny*data(i+1) + (1.0-tiny)*a(i)
where you choose tiny to be an appropriate constant (e.g. if you choose tiny = 1/N, it will have roughly the same amount of averaging as a window of size N, but distributed differently over older points).
Anyway, since the next value of the moving average depends only on the previous one and your data, you don't have to keep a queue or anything. And you can think of this as doing something like, "Well, I've got a new point, but I don't really trust it, so I'm going to keep 80% of my old estimate of the measurement, and only trust this new data point 20%". That's pretty much the same as saying, "Well, I only trust this new point 20%, and I'll use 4 other points that I trust the same amount", except that instead of explicitly taking the 4 other points, you're assuming that the averaging you did last time was sensible so you can use your previous work.
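
A minimal Python sketch of this decaying average (tiny = 0.2 matches the "trust the new point 20%" example above; seeding with the first sample is one common choice, not the only one):

class ExponentialMovingAverage:
    """a(i+1) = tiny*data(i+1) + (1.0-tiny)*a(i): no queue needed,
    only the previous average is kept."""
    def __init__(self, tiny=0.2):  # tiny = 1/N for roughly N-sample averaging
        self.tiny = tiny
        self.value = None
    def update(self, sample):
        if self.value is None:
            self.value = sample  # seed with the first reading
        else:
            self.value = self.tiny * sample + (1.0 - self.tiny) * self.value
        return self.value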
"Moving average I can get down with ... but it strikes me that it's probably quite inefficient."
There's really no reason a moving average should be inefficient. You keep the number of data points you want in some buffer (like a circular queue). On each new data point, you pop the oldest value and subtract it from a sum, and push the newest and add it to the sum. So every new data point really only entails a pop/push, an addition and a subtraction. Your moving average is always this shifting sum divided by the number of values in your buffer.
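
For instance, a short Python sketch of that running-sum buffer, using collections.deque as the circular queue:

from collections import deque

class MovingAverage:
    """Running-sum moving average: each new sample costs one pop, one
    push, one subtraction and one addition, exactly as described."""
    def __init__(self, size):
        self.buffer = deque(maxlen=size)
        self.total = 0.0
    def update(self, sample):
        if len(self.buffer) == self.buffer.maxlen:
            self.total -= self.buffer[0]  # the value about to fall off
        self.buffer.append(sample)
        self.total += sample
        return self.total / len(self.buffer)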
It gets a little trickier if you're receiving data concurrently from multiple threads, but since your data is coming from a hardware device that seems highly doubtful to me.
Oh and also: awful self-taught programmers unite! ;)
An exponentially decaying moving average can be calculated "by hand" with only the trend if you use the proper values. See http://www.fourmilab.ch/hackdiet/e4/ for an idea on how to do this quickly with a pen and paper if you are looking for “exponentially smoothed moving average with 10% smoothing”. But since you have a computer, you probably want to be doing binary shifting as opposed to decimal shifting ;)
This way, all you need is a variable for your current value and one for the average. The next average can then be calculated from that.
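
A sketch of the binary-shift variant (assuming integer samples; a shift by k gives a smoothing factor of 1/2^k, so k = 3 is 12.5%, close to the 10% decimal version):

class ShiftEMA:
    """Integer EMA via a binary shift: avg += (sample - avg) >> k.
    Only two variables are kept: the current sample and the average."""
    def __init__(self, k=3, initial=0):
        self.k = k          # smoothing factor is 1 / 2**k
        self.avg = initial
    def update(self, sample):
        self.avg += (sample - self.avg) >> self.k
        return self.avg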
There's a technique called a range gate that works well with low-occurrence spurious samples. Assuming the use of one of the filter techniques mentioned above (moving average, exponential), once you have "sufficient" history (one time constant) you can test the new incoming data sample for reasonableness before it is added to the computation.
Some knowledge of the maximum reasonable rate of change of the signal is required. The raw sample is compared to the most recent smoothed value, and if the absolute value of that difference is greater than the allowed range, that sample is thrown out (or replaced with some heuristic, e.g. a prediction based on slope/differential, or the "trend" prediction value from double exponential smoothing).
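
A sketch combining the exponential filter with such a range gate (the threshold and the simple "replace with the current estimate" fallback are assumptions; a slope-based prediction could stand in for the fallback):

class RangeGatedEMA:
    """Exponential smoothing with a range gate: once roughly one time
    constant of history has accumulated, samples too far from the
    smoothed value are rejected (here: replaced by the estimate)."""
    def __init__(self, tiny=0.2, max_delta=10.0):
        self.tiny = tiny
        self.max_delta = max_delta  # max reasonable change per sample (assumption)
        self.count = 0
        self.value = None
    def update(self, sample):
        self.count += 1
        if self.value is None:
            self.value = sample
            return self.value
        # Gate only after "sufficient" history (~one time constant).
        if self.count > int(1.0 / self.tiny) and abs(sample - self.value) > self.max_delta:
            sample = self.value  # throw out the spurious sample
        self.value = self.tiny * sample + (1.0 - self.tiny) * self.value
        return self.value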
