Given an array of stock quotes Q[0], ..., Q[n-1] in chronological order and corresponding volumes V[0], ..., V[n-1] in numbers of shares, how can you find the optimal times and volumes at which to buy or sell, when the volume you can trade at time i is limited by V[i]?
I assume that you want to start and end with 0 shares and that you have unlimited capital.
The problem boils down to buying at the lowest prices available and selling at the highest, with the side condition that a share has to be bought before it can be sold.
I would process the data in time order and add purchases as long as there is available volume at a higher price in the future (for each purchase you need to tick off the same number of shares as sold, using the highest future price available).
Continue to move forward in time, adding buys as long as there is a profitable time to sell in the future. If there is surplus volume available but no profitable selling spot in the future, look back to see whether the current price is lower than any purchase already made. In that case, exchange the most expensive shares from the past for the cheaper ones, but only if there is a future selling point available. Also check whether there is any profitable selling point for the scrapped purchase order. (A code sketch of an equivalent computation follows the example below.)
Example:
Day  Price  Volume
1    100    1000
2     80    1000
3    110    1000
4     70    1000
5    120    2000
Day 1:
Purchase 1000 at 100 per share. Plan to sell 1000 on day 5 at 120.
Day 2:
Purchase 1000 at 80 per share. Plan to sell 1000 on day 5 at 120.
Day 3:
No available profitable selling opportunity, because all future volume at prices above 110 is already booked!
Look back and see if you have purchased at prices above 110.
You haven't, so no exchange is made.
Day 4:
No available profitable selling opportunity, because all future volume at prices above 70 is already booked!
Look back and see if you have purchased at prices above 70.
You have: replace the purchase of 1000 shares at 100 on day 1 with a purchase of 1000 shares at 70 on day 4.
Re-examine the shares of day 1 and check whether there is any other profitable sale available (you only need to consider the timeline up to day 4).
There is: purchase 1000 at 100 per share on day 1 and sell them at 110 per share on day 3.
The final order book is:
Day  Price  Volume  Order type  Shares owned
1    100    1000    Buy         1000
2     80    1000    Buy         2000
3    110    1000    Sell        1000
4     70    1000    Buy         2000
5    120    2000    Sell        0
Total profit: 100000 (10000 from the day 1 buy / day 3 sell, 40000 from day 2 / day 5, and 50000 from day 4 / day 5).
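If it helps, the procedure can be condensed using the classic min-heap "exchange" trick; it is a different formulation from the forward scan above, but it computes the same maximum profit. A minimal Python sketch, assuming the daily cap applies to the net volume traded on a day:

import heapq

def max_profit(prices, volumes):
    # Min-heap of (price, volume) lots still available to "buy".
    # When today's price beats the cheapest lot, we sell against it and
    # push the sale price back, so that a later, higher price can take
    # over ("exchange") the sale - mirroring the look-back step above.
    heap = []
    profit = 0
    for p, v in zip(prices, volumes):
        remaining = v  # today's volume cap
        while heap and heap[0][0] < p and remaining > 0:
            bp, bv = heapq.heappop(heap)
            matched = min(bv, remaining)
            profit += (p - bp) * matched
            remaining -= matched
            if bv > matched:  # the unmatched part of the lot stays buyable
                heapq.heappush(heap, (bp, bv - matched))
            # Re-insert the matched volume at today's price: if it is
            # popped later, today's sale is moved to that better day.
            heapq.heappush(heap, (p, matched))
        heapq.heappush(heap, (p, v))  # today's volume is also buyable
    return profit

# The example above:
print(max_profit([100, 80, 110, 70, 120],
                 [1000, 1000, 1000, 1000, 2000]))  # -> 100000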
I'm playing around with spot instances. In Databricks, for example, I can ask for a spot instance with a minimum percentage of savings over on-demand instances.
My question is: if I set 90% off the on-demand price and the current spot price is 50% off, will I get the cheaper instance, or is it like bidding, where I get the spot at the 90% price?
I have some use cases where the availability of an instance is not very important, so it would be good to get those at any discount.
Summing up: if I set a minimum of 90% savings, will I always get the cheapest spot price available?
Thanks!
As per this article from Databricks:
Databricks defaults to a bid price that is equal to the current on-demand price. So if the on-demand price for your instance type is $0.665 cents per hour, then the default bid price is also set to $0.665 cents per hour.
Recall that with Spot instances, you don’t necessarily pay your actual bid price - you pay the Spot market price for your instance type. So, even though you may have bid $0.665 cents, in reality you may only be charged $0.10 for the hour.
So it's safe to assume that you will be charged the current spot market price, but never more than the maximum you set.
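In other words, a toy sketch of the billing rule as I understand it (not an actual Databricks or AWS API call):

def hourly_charge(spot_market_price, max_price):
    # You pay the market rate while it stays at or below your cap; if
    # the market rises above the cap, the instance is reclaimed rather
    # than being charged at the cap.
    if spot_market_price <= max_price:
        return spot_market_price
    return None  # instance lost; you are never charged the cap itself

print(hourly_charge(0.10, 0.665))  # -> 0.10: market rate, not your bid
print(hourly_charge(0.70, 0.665))  # -> None: market moved above your cap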
I would like to set up a backtesting strategy which allows me to trade up to a certain stock position such as 10000.
Therefore, I can go long or short up to 10000 but not over.
I have currently set up a backtesting strategy but I cannot work out how to stop it trading once it hits a limit.
If I buy 10000 then I should only be allowed to sell and not buy.
I have this:
df_tradable["Trade"].groupby(df_tradable["Date"]).cumsum()
This adds up all of my trades for a day.
(Trade is +1 or -1, depending on whether it is a buy or a sell.)
I can set up another check that only adds up P&L when the day's cumulative position is less than 10000; however, once the limit is hit I still want to be able to sell.
Is there an easy way to set this up please?
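For reference, this loop sketch (with toy data and a made-up helper name) captures the behaviour I'm after; I just can't see how to express it without iterating row by row:

import pandas as pd

POSITION_LIMIT = 10000

def cap_trades(trades, limit=POSITION_LIMIT):
    # Replay trades in order, blocking any trade that would push the
    # running position beyond +/- limit.
    position = 0
    executed = []
    for t in trades:
        if abs(position + t) <= limit:
            position += t
            executed.append(t)
        else:
            executed.append(0)  # trade blocked by the position limit
    return executed

df = pd.DataFrame({"Trade": [1, 1, -1, 1, -1, -1]})  # toy stand-in for df_tradable
df["ExecutedTrade"] = cap_trades(df["Trade"])
df["Position"] = df["ExecutedTrade"].cumsum()
print(df)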
Suppose there exists an application X.
Is DAU defined as (a) the number of users that log in to X every single day over a specified period, or (b) the average total number of users that log in to X each day over a specified period?
For example:
Specified period = 5 days
The same 50 users log in to X every day. In addition, a random number of extra users log in to X each day, say 20, 40, 10, 25, 30.
Does DAU = 50, or DAU = (70+90+60+75+80)/5?
DAU is how many unique users visit the site on a given day. In other words, DAU lives only within a specific day, not any longer period.
In your example, the DAU for the first day is 70 users, for the second 90, for the third 60, and so on.
DAU = (70+90+60+75+80)/5 is not a DAU; it is more likely the average DAU over the 5 days.
Likewise, 50 is not the DAU of any single day; it is the number of users who were active on every one of the 5 days, which is yet another metric.
If you want to calculate an active-users index for a specific period, you may use Weekly Active Users (WAU), Monthly Active Users (MAU) or, let's say, a [5 days] Active Users counter.
To calculate "[N days]AU", count the number of unique users during the specific measurement period, such as the previous N days.
So, if User1 (and no one else) logs in to the site every day for 5 days, you'll still have [5 days]AU = 1, because there is only 1 unique active user during that period.
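To make the distinction concrete, here is a small Python sketch with made-up login data:

from collections import defaultdict

logins = [(1, "u1"), (1, "u2"), (2, "u1"), (2, "u3"),
          (3, "u1"), (4, "u1"), (5, "u1"), (5, "u4")]  # (day, user_id)

users_by_day = defaultdict(set)
for day, user in logins:
    users_by_day[day].add(user)

dau = {day: len(users) for day, users in sorted(users_by_day.items())}
avg_dau = sum(dau.values()) / len(dau)                  # average DAU over the period
five_day_au = len(set().union(*users_by_day.values()))  # unique users across all 5 days

print(dau)          # per-day DAU: {1: 2, 2: 2, 3: 1, 4: 1, 5: 2}
print(avg_dau)      # 1.6 - an average of DAUs, not itself a DAU
print(five_day_au)  # 4  - the "[5 days]AU"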
I have a database with stock values in a table, for example:
id - unique id for this entry
stockId - ticker symbol of the stock
value - price of the stock
timestamp - timestamp of that price
I would like to create separate arrays for timeframes of 24 hours, 7 days and 1 month from my database entries, each array containing data points for a stock chart.
For some stockIds I have just a few data points per hour; for others it could be hundreds or thousands.
My question:
What is a good algorithm to "aggregate" the possibly numerous data points into a few? For example, for the 24-hour chart I would like at most 10 data points per hour. And how do I handle exceptionally high or low values?
What is the common approach for stock charts?
Thank you for reading!
Some options (assuming 10 points per hour, i.e. one roughly every 6 minutes):
1. For every 6-minute period, pick the data point closest to the centre of the period.
2. For every 6-minute period, take the average of the points over that period.
3. For each hour, find the maximum and minimum over each 4-minute period, then pick the 5 largest maxima and 5 smallest minima from these respective sets (4 minutes is somewhat arbitrarily chosen).
I originally thought to pick the 5 minimum points and the 5 maximum points such that the maximum points are at least 8 minutes apart, and similarly for the minimum points.
The 8 minutes is there so that the points don't all stack up on each other. Why 8 minutes? At an even distribution, 60/5 = 12 minutes, so moving a bit below that gives 8 minutes.
But in terms of implementation, the 4-minute approach is much simpler and should give similar results.
You'll have to see which one gives you the most desirable results. Option 3 is likely to give a decent indication of the variation across the period, whereas option 2 will tend to produce a more stable graph; option 1 can be a bit more erratic.
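A rough pandas sketch of options 2 and 3 on synthetic data (variable names are my own; option 1 would be a similar resample that keeps the point nearest each bucket's centre):

import numpy as np
import pandas as pd

# One hour of synthetic prices for a hypothetical stock, one point every 20s.
rng = np.random.default_rng(0)
idx = pd.date_range("2024-01-01 09:00", periods=180, freq="20s")
prices = pd.Series(100 + rng.standard_normal(180).cumsum(), index=idx)

# Option 2: one point per 6-minute bucket, the bucket average (10 per hour).
avg = prices.resample("6min").mean()

# Option 3: min and max per 4-minute bucket, then keep the 5 largest maxima
# and 5 smallest minima in the hour (10 points, preserves the extremes).
buckets = prices.resample("4min").agg(["min", "max"])
extremes = pd.concat([buckets["max"].nlargest(5),
                      buckets["min"].nsmallest(5)]).sort_index()

print(avg)
print(extremes)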