How to put my stop loss at the previous swing low before entry?

I want to place my stop loss at the low of the last swing low before entering a long position, and I want this stop to stay fixed until the next long entry. But it does not work. Can you please tell me how to solve this problem? Here is my code; please check this picture to understand more clearly.
//@version=5
strategy(title="Stop order exits", overlay=true)

// Calculate and plot the Exponential Moving Averages (EMAs)
fastEMA = ta.ema(close, 10)
slowEMA = ta.ema(close, 30)
plot(fastEMA, color=color.orange, title="Fast EMA")
plot(slowEMA, color=color.teal, linewidth=2, title="Slow EMA")

// Go long when the fast average crosses above the slow one
if ta.crossover(fastEMA, slowEMA)
    strategy.entry("Enter Long", strategy.long)

strategy.exit("Exit Long", from_entry="Enter Long", stop=ta.lowest(low, 5))
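A likely cause: strategy.exit() here runs on every bar, so stop=ta.lowest(low, 5) is re-evaluated on every bar and the stop trails the price instead of staying fixed. A minimal sketch of one way to lock it (assuming "previous swing low" means the lowest low of the last 5 bars at the moment of entry): store the level in a var variable when the entry signal fires and reuse it until the next entry.

var float longStop = na
swingLow = ta.lowest(low, 5)
if ta.crossover(fastEMA, slowEMA)
    longStop := swingLow  // freeze the swing low at entry time
    strategy.entry("Enter Long", strategy.long)
// longStop keeps its value until the next entry reassigns it
strategy.exit("Exit Long", from_entry="Enter Long", stop=longStop)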

Related

How to detect the base of a pulse signal

I have a time series that ranges from 0 to 10, as in the picture. The sample data is here.
I have tried a moving average and the standard deviation in a moving window, but had no luck detecting the base of the pulse. I manually put black X markers in the figure to show the points I am interested in.
How should I process this time series?
Currently I use find_peaks() with prominence=10 and width=100 as magic numbers. This is only a partially acceptable answer, because it has voodoo variables in it.
from scipy.signal import find_peaks
import matplotlib.pyplot as plt

peaks, props = find_peaks(p_array, prominence=10, width=100)
plt.plot(p_array, color='orange', label='predicted label')
plt.scatter(peaks, p_array[peaks], marker='x', color='red')
plt.scatter([props["left_bases"][0], props["right_bases"][0]], [0, 0],
            marker='x', color='red', label="prominence=10, width=100")
plt.legend(loc="upper right")
plt.show()
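One way to get rid of the voodoo variables is to derive the thresholds from the data itself. A minimal sketch (the synthetic p_array below is a hypothetical stand-in for the real data, and the half-range rule assumes the pulses rise well above a flat baseline):

import numpy as np
from scipy.signal import find_peaks

# Hypothetical stand-in for the real signal: flat baseline, one broad pulse.
t = np.arange(1000)
p_array = 10 * np.exp(-0.5 * ((t - 500) / 60) ** 2)

# Derive the threshold from the data instead of hard-coding it:
# require a prominence of half the signal's peak-to-peak range.
min_prominence = 0.5 * (p_array.max() - p_array.min())

peaks, props = find_peaks(p_array, prominence=min_prominence)

# left_bases/right_bases mark where each pulse leaves and rejoins the
# baseline -- the black "X" markers in the figure.
for lb, rb in zip(props["left_bases"], props["right_bases"]):
    print("pulse base at indices", lb, "and", rb)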

Extracting many regions of interest (ROIs) from thousands of images

I have a large set of microscopy images, and each image has several hundred spots (ROIs). These spots are fixed in space. I want to extract each spot from each image and save it to the workspace so that I can analyze them further.
I have written the code myself and it works correctly, but it is too slow: it takes around 250 seconds to read out all the spots from every image.
The core of my code looks as follows:
for s = 1:NumberImages
    im1 = imread(fn(s,1).name);
    im = im1 - medfilt2(im1, [15,15]);
    for i = 1:length(p_g_x)
        GreenROI(i,s) = double(sum(sum(im(round(p_g_y(i))+(-2:2), round(p_g_x(i))+(-2:2)))));
        RedROI(i,s)   = double(sum(sum(im(round(p_r_y(i))+(-2:2), round(p_r_x(i))+(-2:2)))));
    end
end
As you can see from the code, I am extracting 5x5 regions. The length of p_g_x is between 500 and 700.
Thanks for your input. I used the profile viewer to figure out which function exactly takes the most time: it is the median filter, which accounts for ~90% of the runtime.
Any suggestion to speed it up would be greatly appreciated.
Thanks,
Mahipal
Use Matlab's profiling tools!
profile on     % Starts the profiler
% Run some code now.
profile viewer % Shows you how often each function was called, and
               % where most time was spent. Start with the slowest part.
profile off    % Stops the profiler, so you can measure again.
Pre-allocate
Preallocate the output, because you know its size; this makes the loop much faster. (Matlab told you this already!)
GreenROI = zeros(length(p_g_x), NumberImages); % And the same for RedROI.
Use convolution
Read about Matlab's conv2 function.
for s = 1:NumberImages
    im = imread(fn(s,1).name);
    im = im - medfilt2(im, [15,15]);
    % Pre-compute the sums first. This will only be faster for large p_g_x.
    roi_image = conv2(im, ones(5,5));
    for i = 1:length(p_g_x)
        % You might have to offset the indices by 2, because of the
        % convolution's border padding. Check that the results are the same.
        GreenROI(i,s) = roi_image(round(p_g_y(i)), round(p_g_x(i)));
        RedROI(i,s)   = roi_image(round(p_r_y(i)), round(p_r_x(i)));
    end
end
Matlab-ize the code
Now that you've used convolution to get an image of sums over 5x5 windows (or you could've used @Shai's accumarray, same thing), you can speed things up further by not iterating over each element in p_g_x but using it as a vector straight away.
I leave that as an exercise for the reader (hint: convert p_g_x and p_g_y to linear indices using sub2ind); a sketch follows below.
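A minimal sketch of that last step, assuming roi_image was computed with conv2(im, ones(5,5), 'same') so its indices line up with the original image:

idx_g = sub2ind(size(roi_image), round(p_g_y), round(p_g_x));
idx_r = sub2ind(size(roi_image), round(p_r_y), round(p_r_x));
GreenROI(:,s) = roi_image(idx_g); % all green ROIs of image s at once
RedROI(:,s)   = roi_image(idx_r); % all red ROIs of image s at once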
Update
Our answers, mine included, showed why premature optimisation is a bad thing. Without measuring, I assumed your loop would take most of the time, but when you measured it (thanks!) it turned out that is not the problem. The bottleneck is medfilt2, the median filter, which takes 90% of the time. So you should address that first. (Note that on my computer your original code is fast enough for my taste, but it is still the median filter taking up most of the time.)
Looking at what the median filter operation does might help us figure out how to make it faster. Here is an image: on the left you see the original image, in the middle the output of the median filter, and on the right the result.
To me the result looks awfully similar to an edge detection result. (Mathematically this is no surprise.)
I would suggest you start experimenting with various edge detectors. Have a look at Canny and Sobel, or just use conv2(image, kernel_x) where kernel_x = [1, 2, 1; 0, 0, 0; -1, -2, -1], and the same but transposed for kernel_y. You can find various edge detection options in edge(im, option). I tried all options from {'sobel', 'canny', 'roberts', 'prewitt'}. Except for Canny, they all take about the same time as your median filter method; Canny is 4x slower, and the rest (including the original) take about 7 seconds. All of this without a GPU; imgradient took 9 seconds.
From this I would say that you can't get much faster on the CPU. If you have a GPU and it works with Matlab, you could speed it up: load your images as gpuArrays. There is an example in the medfilt2 documentation. You can still do minor speed-ups, but they can only amount to a 10% increase or so, which is hardly worthwhile.
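A minimal sketch of that GPU route (assuming you have the Parallel Computing Toolbox and a supported GPU):

g     = gpuArray(imread(fn(s,1).name)); % move the image onto the GPU
gFilt = medfilt2(g, [15 15]);           % the median filter runs on the GPU
im    = gather(g - gFilt);              % copy the result back to host memory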
A few things you should do
Pre-allocate as suggested by Didac Perez.
Use the profiler to see what exactly takes long in your code: is it the median filter? Is it the indexing?
Assuming all images are of the same size, you can use accumarray with a fixed mask subs to quickly sum the values:
subs_g = zeros( h, w ); %// allocate mask for green
subs_r = zeros( h, w ); %// allocate mask for red
subs_g( sub2ind( [h w], round(p_g_y), round(p_g_x) ) ) = 1:numel(p_g_x); %// index each region
subs_g = conv2( subs_g, ones(5), 'same' );
subs_r( sub2ind( [h w], round(p_r_y), round(p_r_x) ) ) = 1:numel(p_r_x); %// index each region
subs_r = conv2( subs_r, ones(5), 'same' );
sel_g = subs_g > 0;
sel_r = subs_r > 0;
subs_g = subs_g(sel_g);
subs_r = subs_r(sel_r);
Once these masks are fixed, you can process all the images:
%// pre-allocation goes here - I'll leave it to you
for s = 1:NumberImages
    im1 = imread(fn(s,1).name);
    im  = double( im1 - medfilt2(im1,[15,15]) );
    GreenROI(:,s) = accumarray( subs_g, im( sel_g ) ); % sum all the green ROIs
    RedROI(:,s)   = accumarray( subs_r, im( sel_r ) ); % sum all the red ROIs
end
First, preallocate your GreenROI and RedROI structures, since you already know their final size; right now you are resizing them again and again in each iteration.
Second, I recommend using "tic" and "toc" to investigate where the problem is; they will give you useful timings.
Here is vectorized code that operates on each image:

%// Pre-compute green and red indices to be used across all the images
%// (this assumes the image size, size(im,1), is known before the loop)
r1 = round(bsxfun(@plus, permute(p_g_y,[3 2 1]), (-2:2)'));
c1 = round(bsxfun(@plus, permute(p_g_x,[3 2 1]), -2:2));
green_ind = reshape(bsxfun(@plus, (c1-1)*size(im,1), r1), [], numel(p_g_x));
r2 = round(bsxfun(@plus, permute(p_r_y,[3 2 1]), (-2:2)'));
c2 = round(bsxfun(@plus, permute(p_r_x,[3 2 1]), -2:2));
red_ind = reshape(bsxfun(@plus, (c2-1)*size(im,1), r2), [], numel(p_r_x));

for s = 1:NumberImages
    im1 = imread(fn(s,1).name);
    im  = double(im1 - medfilt2(im1,[15,15]));
    GreenROI(:,s) = sum(im(green_ind)); % one column of sums per image
    RedROI(:,s)   = sum(im(red_ind));
end

Movement algorithm for game

I'm currently developing a script for a map in COD4. I think the language is simple enough that I'm tagging this as language-agnostic, since the problem is in the algorithm for this situation.
There is a room which is 960 units wide, and inside it there's an object in the middle, which we'll treat as the origin. The ball is supposed to move to a random position each time it is hit, but should not travel beyond the walls. Here's a diagram:
The API of the game only allows moving objects relative to their current position, as far as I know, so here's the code I came up with. The problem is that after the second call to head_move() it begins to produce unexpected results, and this is breaking my head. Could somebody help me out?
movementThink()
{
    while (1)
    {
        self waittill ("trigger", player); // Wait till the player hits the object
        head_origin thread head_move();
    }
}

head_move()
{
    /* level.prevx is a global variable which I use to store
       the distance traveled in the previous shot. Defaults to 0. */
    /* This works on the first and second hit, but then it begins to show
       incorrect max and min values. */
    x_min = (0 - 480) + level.prevx;
    x_max = x_min + 960;
    x_units = RandomIntRange( x_min, x_max ); // Create a random integer
    log2screen("MIN: " + x_min + " and MAX: " + x_max + " and MOVED " + x_units);
    log2screen("Moved " + x_units);
    // Movement function: the first parameter is the distance to travel, the second is the speed
    self movex (x_units, level.movespeed);
    level.prevx = x_units;
}
EDIT: Just to clarify: when the user shoots the ball, its position changes by a certain amount. Now, if he hits it again, the min and max values of the random number generator should change to prevent the ball from moving outside the walls. Example:
Level starts. The ball is in the middle of the room. The min and max ranges are -480 and 480 respectively.
The user hits the ball and it is moved -200 units (200 units to the left).
Now, the min and max range should be -280 and 680.
I hope this is clear enough.
EDIT 2: Edited the sign as FlipScript suggested. Here's the output from the log2screen functions, i.e. what is actually happening:
MIN: -480 and MAX 480. MOVED 67
MIN: -413 and MAX 547. MOVED 236
MIN: -244 and MAX 716. MOVED 461
Just a sample case. Something is backwards, I believe; these aren't the right calculations to do.
Your code works only when level.prevx contains your displacement from the starting position, i.e. your absolute position. However, what you are storing is your displacement from your current position. It works the first two times because that displacement happens to equal your absolute position, but once you move again, you lose all track of where you are.
What you should do instead is get rid of min and max, and start by calculating a random absolute position within the bounds. Then use your previously stored absolute position to calculate the relative movement needed to get you there, and store the new absolute position.
head_move()
{
    new_x = RandomIntRange( -480, 480 );   // pick a random absolute position
    delta_x = new_x - level.prevx;         // relative movement needed to get there
    self movex (delta_x, level.movespeed); // move to the new position
    level.prevx = new_x;                   // store the new absolute position
}
I don't know much about the programming environment, but this line
head_origin thread head_move();
is suspicious for troublemaking. What are these tokens? Anything that says thread could be duplicating data structures and throwing your local variables astray.
And why do x_min and x_max change? Where are y_min and y_max?
Something doesn't look right in this line here:
x_max = x_min - 960;
Is the MAX really the MIN MINUS 960? From your description, it sounds like that should be a '+' sign.
EDIT:
Given your additional comments, a minus sign wouldn't allow these statements to be true:
Level starts....The min and max ranges are -480 and 480 respectively
...
Now, the min and max range should be -280 and 680.
Statements 1 and 3 point to that sign needing to be a '+' sign.

Measuring the average thickness of traces in an image

Here's the problem: I have a number of binary images composed of traces of different thickness. Below there are two images to illustrate the problem:
First Image - size: 711 x 643 px
Second Image - size: 930 x 951 px
What I need is to measure the average thickness (in pixels) of the traces in the images. In fact, the average thickness of traces in an image is a somewhat subjective measure, so what I need is a measure that has some correlation with the radius of the trace, as indicated in the figure below:
Notes
Since the measure doesn't need to be very precise, I am willing to trade precision for speed. In other words, speed is an important factor to the solution of this problem.
There might be intersections in the traces.
The trace thickness might not be constant, but an average measure is OK (even the maximum trace thickness is acceptable).
The trace will always be much longer than it is wide.
I'd suggest this algorithm:
Apply a distance transformation to the image, so that all background pixels are set to 0 and all foreground pixels are set to their distance from the background.
Find the local maxima in the distance-transformed image. These are points in the middle of the lines. Put their pixel values (i.e. distances from the background) into a list.
Calculate the median or average of that list.
I was impressed by @nikie's answer, and gave it a try ...
I simplified the algorithm to just get the maximum value, not the mean, thus avoiding the local-maxima detection step. I think this is enough if the stroke is well behaved (although for self-intersecting lines it may not be accurate).
The program in Mathematica is:
m = Import["http://imgur.com/3Zs7m.png"] (* Get image from web*)
s = Abs[ImageData[m] - 1]; (* Invert colors to detect background *)
k = DistanceTransform[Image[s]] (* White Pxs converted to distance to black*)
k // ImageAdjust (* Show the image *)
Max[ImageData[k]] (* Get the max stroke width *)
The generated result is
The numerical value (28.46 px x 2) fits my measurement of 56 px pretty well (although your value is 100 px :)
Edit - Implemented the full algorithm
Well ... sort of ... instead of searching for the local maxima, I find the fixed point of the distance transformation. Almost, but not quite, completely unlike the same thing :)
m = Import["http://imgur.com/3Zs7m.png"]; (* Get image from web *)
s = Abs[ImageData[m] - 1];                (* Invert colors to detect background *)
k = DistanceTransform[Image[s]];          (* White pixels converted to distance to black *)
Print["Distance to Background"]
k // ImageAdjust                          (* Show the image *)
Print["Local Maxima"]
weights = Binarize[FixedPoint[ImageAdjust@DistanceTransform[Image[#], .4] &, s]]
Print["Stroke Width = ",
  2 Mean[Select[Flatten[ImageData[k]] Flatten[ImageData[weights]], # != 0 &]]]
As you may see, the result is very similar to the previous one, obtained with the simplified algorithm.
From here, a simple method!
3.1 Estimating Pen Width
The pen thickness may be readily estimated from the area A and perimeter length L of the foreground
T = A/(L/2)
In essence, we have reshaped the foreground into a rectangle and measured the length of the longest side. Stronger modelling of the pen, for instance as a disc yielding circular ends, might allow greater precision, but rasterisation error would compromise the significance.
While precision is not a major issue, we do need to consider bias and singularities.
We should therefore calculate area A and perimeter length L using functions which take into account "roundedness".
In MATLAB (where bw is the binary foreground image):
A = bwarea(bw)
L = bwarea(bwperim(bw, 8))
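Expanded into a runnable MATLAB sketch (the file name and the 0.5 threshold are assumptions, not from the paper):

bw = ~im2bw(imread('3Zs7m.png'), 0.5); % foreground = true (the trace is dark)
A  = bwarea(bw);                       % area, with "roundedness" correction
L  = bwarea(bwperim(bw, 8));           % perimeter length
T  = A / (L / 2)                       % estimated pen thickness in pixels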
Since I don't have MATLAB at hand, I made a small program in Mathematica:
m = Binarize[Import["http://imgur.com/3Zs7m.png"]] (* Get Image *)
k = Binarize[MorphologicalPerimeter[m]] (* Get Perimeter *)
p = N[2 Count[ImageData[m], Except[1], 2]/
      Count[ImageData[k], Except[0], 2]] (* Calculate T = A/(L/2) *)
The output is 36 px ...
Perimeter image follows
HTH!
It's been 3 years since the question was asked :)
Following @nikie's procedure, here is a MATLAB implementation of the stroke width.
clc;
clear;
close all;

I = imread('3Zs7m.png');
X = im2bw(I, 0.8);
subplottight(2,2,1); % subplottight is a user-supplied helper, not a built-in
imshow(X);

Dist = bwdist(X);
subplottight(2,2,2);
imshow(Dist, []);

RegionMax = imregionalmax(Dist);
[x, y] = find(RegionMax ~= 0);
subplottight(2,2,3);
imshow(RegionMax);

List(1:numel(x)) = 0;
for i = 1:numel(x)
    List(i) = Dist(x(i), y(i));
end
fprintf('Stroke Width = %f\n', mean(List));
Assuming that the trace has constant thickness, is much longer than it is wide, is not too strongly curved, and has no intersections/crossings, I suggest an edge detection algorithm that also determines the direction of the edge, then a rise/fall detector with some trigonometry and a minimization algorithm. This gives you the minimal thickness across a relatively straight part of the curve.
I guess the error to be up to 25%.
First, use an edge detector that tells us where an edge is and which direction it has (in 45° or PI/4 steps). This is done by filtering with four different 3x3 matrices (example).
Usually I'd say it's enough to scan the image horizontally, though you could also scan vertically or diagonally.
Assuming line-by-line (horizontal) scanning, once we find an edge, we check whether it's a rise (going from background to trace color) or a fall (back to background). If the edge's direction is at a right angle to the scanning direction, skip it.
If you find one rise and one fall with the correct directions and without any disturbance in between, measure the distance from the rise to the fall. If the direction is diagonal, multiply by the square root of 2. Store this measurement together with its coordinates.
The algorithm must then search along an edge (I can't find a web resource on that right now) for neighboring (by their coordinates) measurements. If there is a local minimum with a padding of maybe 4 to 5 length units to each side (a value to play with: larger means less information, smaller means more noise), this measurement qualifies as a candidate. This is to ensure that the ends of the trail, or sections that bend too much, are not taken into account.
The minimum of that would be the measurement. Plausibility check: If the trace is not too tangled, there should be a lot of values in that area.
Please comment if there are more questions. :-)
Here is an answer that works in any computer language, without the need for special functions.
Basic idea: try to fit a circle into the black areas of the image. If it fits, try a bigger circle.
Algorithm:
set image background = 0 and trace = 1
initialize array result[]
set minimalExpectedWidth
set w = minimalExpectedWidth
loop
    set counter = 0
    create a matrix of zeros of size w x w
    within a circle of diameter w in that matrix, put ones
    calculate the area of the circle (= PI * (w/2)^2)
    loop through all pixels of the image
        optimization: if the current pixel is of background color -> continue loop
        multiply the matrix with the image at the current pixel (i.e. filter the image with that matrix)
        (you can do this using the current x and y position and a double for loop from 0 to w)
        take the sum of the result of each multiplication
        if the sum equals the calculated circle area, increment counter by one
    store counter in result[w - minimalExpectedWidth]
    increment w by one
    optimization: include the algorithm from further down here
while counter is greater than zero
Now the result array contains the number of matches for each tested width.
Graph it to have a look at it.
For a width of one, this will equal the number of trace-colored pixels. For greater width values, fewer circles fit into the trace. The result array thus steadily decreases until there is a sudden drop; this is because a filter matrix with a circular area of that width only fits into intersections.
Right before the drop is the width of your trace. If the width is not constant, the drop will not be that sudden.
I don't have MATLAB here for testing and don't know for sure about a function to detect this sudden drop, but we do know that the decrease is continuous, so I'd take the maximum of the second derivative of the (zero-based) result array, like this:
Algorithm:
set maximum = 0
set widthFound = 0
set minimalExpectedWidth as above
set prevValue = result[0]
set index = 1
set prevFirstDerivative = result[1] - prevValue
loop until index is greater than result length
    firstDerivative = result[index] - prevValue
    secondDerivative = firstDerivative - prevFirstDerivative
    if secondDerivative > maximum or secondDerivative < maximum * -1
        maximum = secondDerivative
        widthFound = index + minimalExpectedWidth
    prevFirstDerivative = firstDerivative
    prevValue = result[index]
    increment index by one
return widthFound
Now widthFound is the trace width for which (relative to width + 1) many more matches were found.
I know this is partly covered in some of the other answers, but my description is pretty straightforward and you don't have to have studied image processing to follow it.
I have an interesting solution:
Do edge detection to extract the edge pixels.
Do a physical simulation: consider the edge pixels as positively charged particles.
Now put some number of free positively charged particles in the stroke area.
Compute the electrical force equations to determine the movement of these free particles.
Simulate the particle movement for some time, until the particles reach an equilibrium position.
(As they are repelled by both stroke edges, after some time they will settle on the middle line of the stroke.)
Now stroke thickness/2 is the average distance from an edge particle to the nearest free particle.

Determine Framerate Based On Delay

On every frame of my application, I can call timeGetTime() to retrieve the current elapsed milliseconds, and subtract the value of timeGetTime() from the previous frame to get the time between the two frames. To get the frame rate of the application, I use the formula fps = 1000/delay(ms). So, for instance, if the delay is 16 milliseconds, then 1000/16 = 62.5 (stored in memory as 62). If the delay becomes 17 milliseconds, then 1000/17 = 58, and so on:
1000/10=100
1000/11=90
1000/12=83
1000/13=76
1000/14=71
1000/15=66
1000/16=62
1000/17=58
1000/18=55
1000/19=52
1000/20=50
As you can see, for consecutive values of the delay there are pretty big gaps between the frame rates. So how do programs like FRAPS report frame rates that fall inside these gaps (e.g. 51, 53, 54, 56, 57, etc.)?
Why would you do this on every frame? If you do it on every tenth frame instead and divide the elapsed time by 10, you'll easily get frame rates within the gaps you're seeing. You'll also probably find your frame rates are higher, since you're doing less admin work within the loop :-)
In other words, something like (pseudo code):
chkpnt = 10
cntr = chkpnt
baseTime = now()
do lots of times:
    display next frame
    cntr--
    if cntr == 0:
        cntr = chkpnt
        newTime = now()
        # frames per second = frames / elapsed milliseconds * 1000
        display "framerate = " chkpnt * 1000 / (newTime - baseTime)
        baseTime = newTime
In addition to @Marko's suggestion to use a better timer, the key trick for a smoothly varying and more finely grained frame-rate reading is to use a moving average: don't consider only the very latest delay you've observed, but the average of (say) the last five. You can compute the average as a floating-point number, to get more possible values for the frame rate (which you can still round to the nearest integer, of course).
For minimal computation, consider a FIFO queue of the last 5 delays (pseudocode):

array = [16, 16, 16, 16, 16]  # initial estimate, in milliseconds
totdelay = 80
while not Done:
    newest = latestDelay()
    oldest = array.pop(0)
    array.append(newest)
    totdelay += (newest - oldest)
    estimatedFramerate = 5000 / totdelay  # 1000 ms/s * 5 samples / total delay
    ...
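For reference, a runnable sketch of the same idea (my own illustration, assuming frame delays measured in milliseconds):

from collections import deque

WINDOW = 5
delays = deque([16.0] * WINDOW, maxlen=WINDOW)  # initial estimate: 16 ms/frame
total = sum(delays)

def on_frame(latest_delay_ms):
    """Fold the newest frame delay into the moving average; return fps."""
    global total
    total += latest_delay_ms - delays[0]  # delays[0] is about to drop out
    delays.append(latest_delay_ms)        # maxlen discards the oldest entry
    return 1000.0 * WINDOW / total        # 1000 ms/s * samples / total delay

for _ in range(10):                       # a steady 20 ms frame time...
    fps = on_frame(20.0)
print(round(fps, 1))                      # ...converges to 50.0 fps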
Not sure, but maybe you need a better (higher-resolution) timer. Check QueryPerformanceCounter.
Instead of a moving average (as @Alex suggests), I suggest a low-pass filter. It's easier to calculate and can be tweaked to give an arbitrary amount of smoothing with no change in performance or memory usage. In short (demonstrated in JavaScript):
var smoothing = 10; // The larger this value, the more smoothing
var fps = 30;       // some likely starting value
var lastUpdate = new Date;
function onFrameUpdate(){
  var now = new Date;
  var frameTime = now - lastUpdate; // elapsed milliseconds
  var frameFPS = 1000 / frameTime;  // instantaneous frame rate
  // Here's the magic: move fps a fraction of the way toward frameFPS
  fps += (frameFPS - fps) / smoothing;
  lastUpdate = now;
}
For a pretty demo of this functionality, see my live example here:
http://phrogz.net/JS/framerate-independent-low-pass-filter.html
