I'm considering various grid trading strategies and I'm wondering what's the optimal action (if any) to take when the price goes out of grid bounds.
In the image below, the price is within a size 4 grid, with 2 buys below and 2 sells above.
Say the price goes above the highest sell or below the lowest buy. If that happens, one can:
Do nothing, and hope that the price goes back into grid bounds.
Set a stop loss or take profit, and cancel the grid when it triggers.
Lower or raise the grid, so that the price is closer to it or back inside it.
I can't think of any other ways right now. Any suggestions are welcome.
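For what it's worth, here is a minimal Python sketch of the third option (re-centring the grid around the current price once it leaves the bounds); the spacing, level count, and prices are made-up illustration values, not taken from any particular bot.

def build_grid(center, spacing, levels_per_side=2):
    # Return (buy_levels, sell_levels) around a center price.
    buys = [center - spacing * i for i in range(1, levels_per_side + 1)]
    sells = [center + spacing * i for i in range(1, levels_per_side + 1)]
    return buys, sells

def maybe_shift_grid(price, buys, sells, spacing):
    # Option 3: if the price has left the grid, rebuild it around the current price.
    if min(buys) <= price <= max(sells):
        return buys, sells                 # still inside: do nothing (option 1)
    return build_grid(price, spacing)      # re-centre so the price is back inside

# Example: a size 4 grid with 2 buys below and 2 sells above, spacing 10
buys, sells = build_grid(center=100.0, spacing=10.0)
buys, sells = maybe_shift_grid(price=135.0, buys=buys, sells=sells, spacing=10.0)
print(buys, sells)                         # grid is now centred on 135.0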
I designed a new trading strategy called bollmaker to overcome one of the main drawbacks of grid trading: the price range issue.
Check it out; it's an open source strategy:
https://medium.com/bbgo/bbgo-bollmaker-trading-strategy-3a4e80b482e3
I've been trying to solve this problem for weeks. I've tried various formulas (ArrayFormula, ABS, SUMPRODUCT, putting a negative sign on the cells), but I can't seem to get it right.
Manually subtracting the cells one by one always gives the correct result, but it becomes far too slow and unwieldy once the sheet has more than 100 rows.
=if(D14<(E3-E4-E5-E6-E7-E8-E9-E10-E11-E12-E13),D14,E3-E4-E5-E6-E7-E8-E9-E10-E11-E12-E13)
Here's the link to the sheet: https://docs.google.com/spreadsheets/d/1fAPQHKupKglBAJpoxrcVqWP343m0P5QOj8zp1FvasEA/edit?usp=sharing
The overall idea is that the Total Purchased should be compared to the Total Sold. The 2201 value in Total Sold is retrieved from another transactions sheet and simply totals every sold item; from E4 (cell value 170) onwards it decreases, since we only need to know the number of sold items from that row on.
Thank you very much for taking the time to read this. I'm looking forward to any help, as this has been stressing me out for weeks now.
Use a cumulative sum:
=arrayformula(mmult(1*(transpose(row(D4:D))<=row(D4:D)),if(D4:D="",0,D4:D)))
and include it in your formula in E4 as follows:
=arrayformula(if(D4:D<($E$3-(mmult(1*(transpose(row(D4:D))<=row(D4:D)),if(D4:D="",0,D4:D)))),D4:D))
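For anyone more comfortable reading this outside Sheets, here is a rough Python equivalent of what the MMULT expression computes (a running total of column D) and how the E4 formula uses it; the column values below are placeholders, not the actual sheet data.

total_sold = 2201                      # the value in E3
d_column = [170, 30, 50, 0, 25]        # placeholder values standing in for D4:D
running = 0
result = []
for d in d_column:
    running += d                       # cumulative sum of D4:D, as the MMULT trick produces
    # the E4 formula keeps D only while it stays below the remaining total
    result.append(d if d < total_sold - running else None)
print(result)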
I am in the process of designing an algorithm that will calculate regions in a candlestick chart where strong areas of support exist. An "area of support" in this case is defined as an area in the chart where the price of a stock rises by a large amount in a short period of time. (Please see the diagram below; the blue dots represent these strong areas of support.)
The data I am working with is a list of over 6000 TOHLC (timestamp, open price, high price, low price, close price) values. For example, the first entry in this list of data is:
[1555286400, 83.7, 84.63, 83.7, 84.27]
The way I have structured the algorithm to work is as follows:
1.) The list of 6000+ TOHLC values is split into sub-lists of 30 TOHLC values (30 is a number that I chose arbitrarily). The lowest low price (LLP) is then obtained from each of these sub-lists. The purpose of this step is to find areas in the chart where prices dip.
2.) The next step is to determine how high the price rose from each of these lows. For this, I take the next 30 candlestick values after the low and determine the highest high price (HHP). Then, if HHP / LLP >= 1.03, the low price is accepted; otherwise it is discarded. Again, 1.03 is a value I chose arbitrarily, by analysing the stock chart manually and determining how much the price rose on average from these lows.
The blue dots in the chart above represent the areas of support accepted by the algorithm. It appears to be working well in terms of what I am trying to achieve.
So the question I have is: does anyone have any improvements they can suggest for this algorithm, or point out any faults in it?
Thanks!
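For reference, here is a minimal sketch of the procedure as I read it, assuming the data is a list of [timestamp, open, high, low, close] rows like the example above; the chunk size and 1.03 threshold are the arbitrary values from the question.

def find_support_areas(tohlc, chunk=30, threshold=1.03):
    # tohlc: list of [timestamp, open, high, low, close] rows
    supports = []
    for start in range(0, len(tohlc), chunk):
        window_len = len(tohlc[start:start + chunk])
        # index (in the full list) of the lowest low price within this chunk
        low_idx = min(range(start, start + window_len), key=lambda i: tohlc[i][3])
        llp = tohlc[low_idx][3]
        # highest high over the next `chunk` candles after that low
        lookahead = tohlc[low_idx + 1:low_idx + 1 + chunk]
        if not lookahead or llp == 0:
            continue
        hhp = max(row[2] for row in lookahead)
        if hhp / llp >= threshold:
            supports.append((tohlc[low_idx][0], llp))   # (timestamp, low price)
    return supports

# A real run needs the full 6000+ rows; one row only shows the shape of the input.
print(find_support_areas([[1555286400, 83.7, 84.63, 83.7, 84.27]]))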
I may have misunderstood, but from your explanation it seems like you are doing the calculation on separate 30-element sub-lists and then combining the results.
So what happens if the LLP is the 30th element of sub-list N and the HHP is the 1st element of sub-list N+1? If you have taken that into account, then it's fine.
If you haven't, I would suggest a moving-window approach when reading the data: start from the 0th element of the 6000+ TOHLC list with a window size of 30 and slide it forward one element at a time. This way you won't miss any values.
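Roughly, the moving-window idea looks like this (window size kept at 30 only because that is the value from the question):

def sliding_lows(tohlc, window=30):
    lows = set()
    for start in range(0, len(tohlc) - window + 1):
        chunk = tohlc[start:start + window]
        # index (in the full list) of the lowest low within this window position
        lows.add(start + min(range(window), key=lambda i: chunk[i][3]))
    return sorted(lows)   # de-duplicated candidate lows from overlapping windows

# Each candidate index can then go through the same HHP / LLP >= 1.03 check as before.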
Some of the selected blue dots have deeper dips than others. Why is that? I would separate them with another classifier; if you store them in an object, store the dip rate as well.
Floating-point numbers are not recommended in finance. If possible, I'd use a different representation, and perhaps a different classifier, working solely with integers. It may not bother you or your project right now, but it will start to produce false results as the numbers add up in the future.
I need to calculate the nearest data point in a time series chart to a specific point in the chart.
I obviously cannot use d = sqrt(x*x + y*y), since my x-axis is time; it makes no sense to add distance and time together in one equation (x and y need to have the same units). Moreover, even if it looks right visually, the result still depends on the scale of the x-axis.
So what is the best logic I can use to find the nearest point?
I can think of using a quadratic form of x (i.e. time) so that my final function is f(x*x, y), but that is just a subjective equation.
Does anyone have a better, more logical approach? An intuitive approach would be ideal, but if there is a more complicated model I would still like to know about it and explore it.
Thanks
EDIT
To give some background: I am polling people to predict where the stock price will be in April (they have to mention the exact date when they expect the price to be there) ... How do I measure their performance?
One intuitive way is by calculating the average absolute change per day.
i.e.
sum of the absolute day-over-day changes / total number of days in the series.
Then I can translate each day into price units, i.e. the average price change per day.
Thus if the average absolute change per day is, say, 2, then a point that is 10 days away can be treated as 20 price points away.
Then I can calculate the distance using the sqrt(x*x + y*y) formula.
This can be fine-tuned by using a bell curve (standard deviation and mean) rather than just the mean absolute change per day, but that would make the solution more complicated.
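A small sketch of that idea: convert the time gap into price units using the average absolute daily change, then apply the usual Euclidean distance (the price series below is a placeholder).

import math
prices = [100.0, 102.0, 101.0, 105.0, 103.0]              # one close per day (placeholder)
daily_moves = [abs(b - a) for a, b in zip(prices, prices[1:])]
avg_abs_change = sum(daily_moves) / len(daily_moves)      # average absolute change per day

def distance(days_apart, price_diff):
    # Convert days into equivalent price points, then use plain Euclidean distance.
    x = days_apart * avg_abs_change                       # e.g. 2 points/day * 10 days = 20
    return math.hypot(x, price_diff)

# A prediction 10 days away from the actual date and 5 price points off:
print(distance(days_apart=10, price_diff=5))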
I am trying to figure out the best way to set up custom price calculations. Basically my product will have 5 options, for example:
Width (mm):
Bends (Qty):
Length:
Qty:
Straps: Yes/No
I can do these as custom options; however, it's quite a complex calculation and needs to be carried out in a specific order. An example would be:
The width needs to be rounded up to the next 50
Discount to be applied
Add cost per bend (bends are worked out at £0.06/m * length)
Multiply by Qty
Add 10% for straps
I know how to do the calculation, but I'm just not sure of the best way to integrate it with Magento.
Any ideas would be much appreciated.
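Leaving the Magento wiring aside, a plain Python sketch of the calculation order might look like this; the base rate per mm, the discount, and the strap handling are assumptions for illustration, while the round-up-to-50, £0.06/m bend rate, and 10% figures come from the example above.

import math
BASE_PRICE_PER_MM = 0.02     # hypothetical base price per mm of width
BEND_RATE_PER_M = 0.06       # £0.06 per metre of length, per bend
STRAP_UPLIFT = 0.10          # +10% if straps are selected

def quote(width_mm, bends, length_m, qty, straps, discount=0.0):
    width_mm = math.ceil(width_mm / 50) * 50          # 1. round width up to the next 50
    price = width_mm * BASE_PRICE_PER_MM
    price *= (1 - discount)                           # 2. apply discount
    price += bends * BEND_RATE_PER_M * length_m       # 3. add cost per bend
    price *= qty                                      # 4. multiply by quantity
    if straps:
        price *= (1 + STRAP_UPLIFT)                   # 5. add 10% for straps
    return round(price, 2)

print(quote(width_mm=430, bends=3, length_m=2.5, qty=4, straps=True, discount=0.05))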
What algorithm would be good for this?
I have a list of tickets, each with an assigned priority from 1-5, 1 being the lowest and 5 the most important. The arithmetic mean wouldn't do me any good because a high-priority ticket cancels out a lower one. The mode wouldn't have enough samples, and the median has the same problem. What would you suggest?
Edit: I'm trying to find a nice (reasonable) score to report the problems for a given set of tickets.
A simple bar chart would be the best way to represent your data here (with assigned priority on the x-axis, and the y-axis representing the number of tickets for each priority). This presentation would pass the inter-ocular percussion test (a.k.a. "it hits you right between the eyes").
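For completeness, the tally behind such a bar chart is trivial to produce; the priorities below are placeholder data.

from collections import Counter
ticket_priorities = [1, 3, 3, 5, 2, 5, 5, 4, 1, 3]   # placeholder ticket priorities
counts = Counter(ticket_priorities)
for priority in range(1, 6):
    n = counts.get(priority, 0)
    print(f"priority {priority}: {'#' * n} ({n})")    # text version of the bar chart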