Using day trading limits

I would like to set up a backtesting strategy which allows me to trade up to a certain stock position such as 10000.
Therefore, I can go long or short up to 10000 but not over.
I have currently set up a backtesting strategy but I cannot work out how to stop it trading once it hits a limit.
If I buy 10000 then I should only be allowed to sell and not buy.
I have this:
df_tradable["Trade"].groupby(df_tradable["Date"]).cumsum()
This adds up all of my trades for a day.
(Trade is +1 or -1 depending if buy or sell)
I can add another check that only accumulates P&L while my daily traded position is less than 10000; however, I still want to be able to sell once the limit is hit.
Is there an easy way to set this up please?
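One simple approach (a sketch, assuming `Trade` holds +1/-1 per the question and the limit applies per day): iterate over each day's trades, keep a running position, and zero out any trade that would push the position beyond the limit. Sells are still allowed once the cap is hit because they move the position back toward zero.

```python
def cap_trades(trades, limit=10000):
    """Zero out any trade that would push the running position beyond +/-limit."""
    pos = 0
    capped = []
    for t in trades:
        if abs(pos + t) <= limit:
            pos += t
            capped.append(t)
        else:
            capped.append(0)  # trade blocked: would breach the limit
    return capped

# e.g., with limit=2: the third buy is blocked, the sell still goes through
print(cap_trades([1, 1, 1, -1], limit=2))  # [1, 1, 0, -1]
```

With pandas this could be applied per day via something like `df_tradable.groupby("Date")["Trade"].transform(lambda s: cap_trades(s.tolist()))`, and the capped column can then feed the existing `cumsum`.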


Configure Amazon maximum percentage of OnDemand price (spot instances)

I'm playing a little with spot instances, and, for example, in Databricks I can ask for a spot instance with a minimum % savings over On-Demand instances.
My question is: if I set 90% off the On-Demand price and the current spot price is only 50% off, will I get the cheaper instance, or is it like bidding, where I pay the 90% price?
I have some use cases where the availability of an instance is not very important, so it would be good to get those at any discount.
Summing up: if I set a minimum of 90%, will I always get the cheapest spot available?
Thanks!
As per this article from Databricks:
Databricks defaults to a bid price that is equal to the current on-demand price. So if the on-demand price for your instance type is $0.665 cents per hour, then the default bid price is also set to $0.665 cents per hour.
Recall that with Spot instances, you don’t necessarily pay your actual bid price - you pay the Spot market price for your instance type. So, even though you may have bid $0.665 cents, in reality you may only be charged $0.10 for the hour.
So it's safe to assume that you will be charged the current spot market price, but never more than what you set.
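Sketching that billing rule (the prices here are just the figures from the quoted example; `max_price` plays the role of your bid/max-price setting):

```python
def hourly_charge(spot_market_price, max_price):
    """You pay the spot market price, capped by your max-price setting;
    if the market moves above your max, you don't get the instance."""
    if spot_market_price > max_price:
        return None  # instance not granted / reclaimed
    return spot_market_price

print(hourly_charge(0.10, 0.665))  # pay the market price: 0.1
print(hourly_charge(0.70, 0.665))  # market above your max: None
```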

Got a load testing requirement: given both avg. and peak TPH, plus avg. and peak user load

I have to do performance testing of an e-commerce application, and I was given the details needed: average TPH and peak TPH, plus average and peak user counts.
For example: an average of 1000 orders/hour, a peak of 3000 orders/hour during the holiday season, expected to grow to 6000 orders/hour next holiday season.
I am unsure which values to use for current users and TPH when running a one-hour load test.
Also, what load is preferable for stress testing and scalability testing?
This would be a great help not only from the testing point of view but also for understanding the concepts, which would help me a great deal down the line.
This is a high business-risk endeavor. Get it wrong and your ledger doesn't go from red to black on the day after Thanksgiving, plus you have a high probability of winding up with a bad public relations event on Twitter. Add to that: greater than 40% of people who hit a website failure will not return.
That being said, do your skills match the risk to the business? If not, the best thing to do is to advise your management to acquire a higher-skilled team. Then you should shadow them in all of their actions.
I think it helps to have some numbers here. There are roughly 35 days in this year's holiday shopping season. This translates to 840 hours.
- $25 avg sale: revenue of $21 million
- $50 avg sale: $42 million
- $100 avg sale: $84 million
Numbers based upon an average of 1000 sales per hour over 840 hours.
Every hour of downtime at peak costs you:
- $25 avg sale: ~$75K
- $50 avg sale: ~$150K
- $100 avg sale: ~$300K
Numbers based upon 3000 orders per hour at peak. If you have downtime, then greater than 40% of people will not return, based upon the latest studies. And you have the Twitter effect, where people complain loudly and drive off potential site visitors.
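The arithmetic behind those figures, for anyone who wants to plug in their own assumptions:

```python
def season_revenue(orders_per_hour, avg_sale, hours=840):
    """Season revenue at a steady order rate (35 days * 24 h = 840 h)."""
    return orders_per_hour * hours * avg_sale

def downtime_cost_per_hour(peak_orders_per_hour, avg_sale):
    """Lost revenue for one hour of downtime at peak order rate."""
    return peak_orders_per_hour * avg_sale

print(season_revenue(1000, 25))          # 21000000
print(downtime_cost_per_hour(3000, 25))  # 75000
```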
I would advise you to bring in a team. Act fast, the really good engineers are quickly being snapped up for Holiday work. These are not numbers to take lightly nor is it work to press someone into that hasn't done it before.
If you are seriously in need, and your marketing department knows exactly how much increased conversion they get from a faster website, then I can find someone for you. They will do the work upfront at no charge, but they will charge a 12-month residual based upon the decrease in response time and the increased conversion that results.
Normally performance testing is not limited to only one scenario; you need to run different performance test types to assess various aspects of your application.
Load Testing - the goal is to check how your application behaves under the anticipated load; in your case that would be simulating 1000 orders per hour.
Stress Testing - putting the application under test under the maximum anticipated load (in your case 3000 TPH). Another approach is gradually increasing the load until response times exceed acceptable thresholds or errors start occurring (whichever comes first), or up to 6000 TPH if you don't plan to scale up. This way you will be able to identify the bottleneck and determine which component fails first, which could be:
- lack of hardware power
- problems with the database
- inefficient algorithms used in your application
You can also consider executing a Soak Test - putting your application under prolonged load; this way you will be able to catch the majority of memory leaks.
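When translating the TPH targets above into virtual-user counts for the test tool, Little's Law (concurrent users = throughput x time each user spends in the system) is a common rough sanity check. A sketch, where the 60-second journey time is a made-up assumption you would replace with your own measured response + think times:

```python
def required_users(tph, seconds_per_iteration):
    """Little's Law: concurrent users = throughput (tx/s) * time per iteration."""
    tps = tph / 3600.0
    return tps * seconds_per_iteration

# Assumed figure: each order takes ~60 s of user journey (responses + think time)
avg_users = required_users(1000, 60)   # load test target
peak_users = required_users(3000, 60)  # stress test target
print(avg_users, peak_users)
```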

Distribute user active time blocks subject to total constraint

I am building an agent-based model for product usage. I am trying to develop a function to decide whether the user is using the product at a given time, while incorporating randomness.
So, say we know the user spends a total of 1 hour per day using the product, and we know the average distribution of this time (e.g., most used at 6-8pm).
How can I generate a set of usage/non-usage times (i.e., during each 10-minute block, is the user active or not) while ensuring that at the end of the day the total active time sums to one hour?
In most cases I would just run the distributor without concern for the total, and then at the end normalize by making it proportional to the total target time so the total was 1 hour. However, I can't do that because time blocks must be 10 minutes. I think this is a different question because I'm really not computing time ranges, I'm computing booleans to associate with different 10 minute time blocks (e.g., the user was/was not active during a given block).
Is there a standard way to do this?
I did some more thinking and figured it out, if anyone else is looking at this.
The approach to take is this: You know the allowed number n of 10-minute time blocks for a given agent.
Iterate n times, and on each iteration select a time block out of the day subject to your activity distribution function.
Main point is to iterate over the number of time blocks you want to place, not over the entire day.
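A minimal sketch of that approach in Python (the function names and the example evening-biased weights are assumptions): draw the n active blocks, without replacement, from the day's 144 ten-minute blocks, weighted by the usage distribution, so the total active time is exactly n blocks.

```python
import random

def pick_active_blocks(n_active, weights, rng=random):
    """Pick n_active distinct 10-minute blocks out of the day,
    weighted by the usage distribution `weights` (one weight per block)."""
    blocks = list(range(len(weights)))
    w = list(weights)
    active = set()
    for _ in range(n_active):
        choice = rng.choices(blocks, weights=w, k=1)[0]
        active.add(choice)
        w[choice] = 0  # without replacement: the same block can't be picked twice
    return active

# 144 ten-minute blocks per day; bias toward 6-8pm (blocks 108-120), made-up weights
weights = [5 if 108 <= b < 120 else 1 for b in range(144)]
active = pick_active_blocks(6, weights)  # 6 blocks = exactly 1 hour
print(len(active))
```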

Parse pricing and requests per second

Parse now allows us to send 30 requests/second, but it is not straightforward to me.
Quoting some info gathered:
Here they say
At 30 requests/sec, an app can send us 77.76 million API requests in
a month before needing to pay a dime.
So I suppose he meant
send up to 77.76 million
Here, they suggest the rate of requests/second is calculated in a small window, generally a minute. This was answered about 2 years ago.
On their pricing faq page they give an example:
if an app is set to 30 requests/second, your app will hit its request
limit once it makes more than 1,800 requests over a 60 second period.
Suggesting that the window is one minute, even though they didn't clearly state it.
What intrigues me is that they say:
Pricing is pro-rated by the hour.
What does it mean? (sorry if it's obvious, English is not my first language)
Has anyone actually used parse and kept track of those request/second and burst/limits?
The only resource I found was a guy saying he had a web application with 10,000 users/day staying on the website around 4 minutes, and he was under 12 r/s.
Moreover, if my app logs users' activities, would it be good practice to cache this info and then send it at random times, say between 3am and 7am?
Any help is very appreciated. My company is deciding whether to go forward with Parse or not, so it's very important.
They could have worded it better but it basically means the same as "We'll charge you for a minimum of 1 hour based on the request limit you have set".
Here's an example. Assume you are using a 40 req/s setting at $100/month, which is $100 per 720 hours. If you keep this setting for just 1 minute, you'll still be charged for a full hour, roughly $0.14.
You can change the request limit as often as you want. So if your app/site receives peak traffic for only 12 hours/day, you can increase the limit just for those 12 hours and end up paying just for those 12 hours.
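A quick sketch of how that hourly proration works out (the $100/month for the 40 req/s tier comes from the example above; the 720 hours/month figure assumes a 30-day month):

```python
def prorated_charge(monthly_price, hours_used, hours_in_month=720):
    """Hourly proration: pay the tier's hourly rate only for the hours used."""
    hourly_rate = monthly_price / hours_in_month
    return hourly_rate * hours_used

# 40 req/s tier at $100/month, kept for only 12 hours/day over 30 days
cost = prorated_charge(100, 12 * 30)
print(round(cost, 2))  # half the hours in the month -> half the monthly price
```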
Check the third question (How frequently can I increase/decrease my request limit?) on the FAQ page at https://parse.com/plans/faq
How frequently can I increase/decrease my request limit?
You can increase/decrease your request/limit as frequently as you would like
within a given month. We will prorate your charges on an hourly basis.
It's not really clear what the pro-rating means, as I understand the setting to be an explicit limit that you pay for. If your limit is exceeded, the requests fail. I don't think there's an option to allow payment on demand when the limit is exceeded, though pro-rating would allow that.
The one minute is accurate and that is the current limit management.
The point of the pricing model is that your service should be making money before you reach any of the limits. If you have enough users to hit the limits and you aren't making money then you need to reconsider your business plan. As such you shouldn't need to upload at random times of day as your users should naturally spread out a bit.
Here is something that can help you understand Parse's requests per second per user.
Parse estimates that the average app's active user will issue 10 requests. Thus, if you had a million users on a particular day, and their traffic was evenly spread throughout the day, you could estimate your app would need about 10,000,000 total API requests, or about 120 requests per second. Every app is different, therefore Parse encourages you to measure how many requests your users send.
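That estimate works out as follows (a day has 86,400 seconds; the 10-requests-per-user figure is Parse's average from the quote above):

```python
def estimated_rps(daily_users, requests_per_user=10, seconds_per_day=86_400):
    """Average requests/second if daily traffic were spread evenly over the day."""
    return daily_users * requests_per_user / seconds_per_day

rps = estimated_rps(1_000_000)  # one million users, 10 requests each
print(round(rps))  # about 116, which Parse rounds up to "about 120"
```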
You can read more in this question, answered directly by Parse staff, via the Parse Q&A link.
Hope this helps

Maximizing profits for given stock quotes and volumes

Given an array of stock quotes Q[0], ..., Q[n-1] in chronological order and corresponding volumes V[0], ..., V[n-1] as numbers of shares, how can you find the optimal times and volumes to buy or sell when the volumes of your trades are each limited by V[0], ..., V[n-1]?
I assume that you want to start and end with 0 shares in each stock and that you have unlimited capital.
The problem can be boiled down to buying at the lowest prices available and selling at the highest, with the side condition that buying a share has to be done prior to selling it.
I would process the data in time order and add purchases as long as there is volume available at a higher price in the future (for each purchase you need to tick off the same number of shares as sold, using the highest future price available).
Continue to move forward in time and add buys as long as there is a profitable time to sell in the future. If there is surplus volume available but no profitable selling spot in the future, look back to see if the current price is lower than any purchase already made. In that case, exchange the most expensive shares from the past for the cheaper ones, but only if there is a future selling point available. Also check whether there is any profitable selling point available for the scrapped purchase order.
Example:
Day Price Volume
1 100 1000
2 80 1000
3 110 1000
4 70 1000
5 120 2000
Day 1:
Purchase 1000 at 100 per share. Sell 1000 on day 5 at 120.
Day 2:
Purchase 1000 at 80 per share. Sell 1000 on day 5 at 120.
Day 3:
No available profitable selling opportunity, because all future volume at prices above 110 is already booked!
Look back and see if you have purchased at prices above 110.
You haven't, so there is no purchase.
Day 4:
No available profitable opportunity because all future volumes at prices above 70 are already booked!
Look back and see if you have purchased at prices above 70.
Replace purchase of 1000 shares day 1 with purchase of 1000 shares at 70 day 4.
Re-examine the shares of day one and check if there is any other profitable sale available (you only need to consider the timeline up to day 4).
There is, so purchase 1000 at 100 per share day 1 and sell them at 110 per share day 3.
The final order book is:
Day Price Volume Order type shares owned
1 100 1000 Buy 1000
2 80 1000 Buy 2000
3 110 1000 Sell 1000
4 70 1000 Buy 2000
5 120 2000 Sell 0
Total profit: 100000
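The "exchange the most expensive past purchase" step above is what the classic min-heap trick for this kind of problem automates. Below is a sketch (my own implementation of the greedy described, not part of the original answer): keep a min-heap of candidate buy lots; each day, sell today's capacity against the cheapest cheaper lots, then re-push the sold shares at today's price so a later, even higher price can take over the sale. Leftover heap entries at the end are simply purchases never executed.

```python
import heapq

def max_profit(prices, volumes):
    """Greedy with a min-heap of candidate buy lots (price, shares).
    Re-pushing sold shares at the sale price implements the
    'replace an expensive purchase with a cheaper one' step."""
    heap = []    # candidate buy lots: (price, shares)
    profit = 0
    for price, vol in zip(prices, volumes):
        remaining = vol          # today's trade capacity
        sold = 0
        while remaining and heap and heap[0][0] < price:
            buy_price, shares = heapq.heappop(heap)
            take = min(shares, remaining)
            profit += (price - buy_price) * take
            remaining -= take
            sold += take
            if shares > take:
                heapq.heappush(heap, (buy_price, shares - take))
        if sold:                 # undo option: a later, higher price may take over
            heapq.heappush(heap, (price, sold))
        if remaining:            # leftover capacity becomes a candidate buy lot
            heapq.heappush(heap, (price, remaining))
    return profit

print(max_profit([100, 80, 110, 70, 120],
                 [1000, 1000, 1000, 1000, 2000]))  # 100000
```

On the worked example it reproduces the answer's total of 100000, though the daily volume cap is treated somewhat loosely by the re-push trick, so treat it as a starting point rather than a verified solution.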
