I'm playing a little with spot instances, and, for example, in Databricks I can ask for a spot instance with a minimum percentage of savings over On-Demand instances.
My question is: if I set 90% off the On-Demand instance and the current price is 50%, will I get the cheaper instance, or is it like bidding, where I get the spot at the 90% price?
I have some use cases where the availability of an instance is not very important, so it would be good to get those at any discount.
Summing up, if I set a minimum of 90%, will I always get the cheapest spot price available?
Thanks!
As per this article from Databricks:
Databricks defaults to a bid price that is equal to the current on-demand price. So if the on-demand price for your instance type is $0.665 cents per hour, then the default bid price is also set to $0.665 cents per hour.
Recall that with Spot instances, you don’t necessarily pay your actual bid price - you pay the Spot market price for your instance type. So, even though you may have bid $0.665 cents, in reality you may only be charged $0.10 for the hour.
So it's safe to assume that you will be charged the current spot market price, but never more than what you set.
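To make the billing rule concrete, here is a tiny Python illustration of the charging logic described above (the figures are just the ones quoted, not real prices):

def hourly_spot_charge(bid_price, spot_market_price):
    # You only get (and keep) the spot instance while the market price
    # stays at or below your bid; you are charged the market price.
    if spot_market_price > bid_price:
        return None  # bid too low: no spot capacity at this price
    return spot_market_price

on_demand = 0.665      # default Databricks bid = current on-demand price
current_spot = 0.10    # example spot market price from the quote above
print(hourly_spot_charge(on_demand, current_spot))  # -> 0.1, the market price, not the bid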
How can I calculate a risk percentage and cluster customers by it if the only details available are profit per customer and number of orders?
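One possible starting point (just a sketch: it assumes one row per customer with those two columns, scales them, and uses k-means; whether a cluster counts as "high risk" is an interpretation you add afterwards):

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical data: one row per customer with the two available details.
customers = pd.DataFrame({
    "profit_per_customer": [120.0, 15.0, -30.0, 400.0, 55.0, -5.0],
    "num_orders":          [10,     2,    1,     25,    6,    1],
})

# Scale both features so neither dominates the distance metric.
features = StandardScaler().fit_transform(customers)

# Group customers into e.g. three clusters.
customers["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(customers)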
I have to do performance testing of an e-commerce application, and I have the details needed, such as average TPH and peak TPH, as well as average users and peak users.
For example: an average of 1000 orders/hour, a peak of 3000 orders/hour during the holiday season, expected to grow to 6000 orders/hour next holiday season.
I am unsure which values to use for current users and TPH when running a one-hour load test.
Also, what load would be preferable for stress testing and scalability testing?
An answer would be a great help not only from a testing point of view, but also for understanding the concepts, which would help me a great deal down the line.
This is a high business-risk endeavor. Get it wrong and your ledger doesn't go from red to black on the day after Thanksgiving, plus you have a high probability of winding up with a bad public relations event on Twitter. Add to that that more than 40% of people who hit a website failure will not return.
That being said, do your skills match the risk to the business? If not, the best thing to do is to advise your management to acquire a more highly skilled team. Then you should shadow them in all of their actions.
I think it helps to have some numbers here. There are roughly 35 days in this year's holiday shopping season. This translates to 840 hours.
At a $25 average sale, you are looking at revenue of $21 million
At a $50 average sale, $42 million
At a $100 average sale, $84 million
Numbers based upon an average of 1000 sales per hour over 840 hours.
Every hour of downtime at peak costs you:
At a $25 average sale, $75K
At a $50 average sale, $150K
At a $100 average sale, $300K
Numbers based upon 3000 orders per hour at peak. If you have downtime, then more than 40% of people will not return, based upon the latest studies. And you have the Twitter effect, where people complain loudly and draw off potential site visitors.
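For reference, the arithmetic behind those figures:

hours = 35 * 24               # roughly 35 days of holiday shopping season = 840 hours
avg_orders_per_hour = 1000
peak_orders_per_hour = 3000

for avg_sale in (25, 50, 100):
    season_revenue = hours * avg_orders_per_hour * avg_sale
    downtime_cost = peak_orders_per_hour * avg_sale
    print(f"${avg_sale} avg sale: ${season_revenue:,} season revenue, "
          f"${downtime_cost:,} lost per peak hour of downtime")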
I would advise you to bring in a team. Act fast; the really good engineers are quickly being snapped up for holiday work. These are not numbers to take lightly, nor is it work to press someone into who hasn't done it before.
If you are seriously in need and your marketing department knows exactly how much increased conversion they get from a faster website, then I can find someone for you. They will do the work upfront at no charge, but they will charge a 12-month residual based upon the decrease in response time and the increased conversion that results.
Normally performance testing is not limited to only one scenario; you need to run different performance test types to assess various aspects of your application.
Load Testing - the goal is to check how your application behaves under the anticipated load; in your case that would be simulating 1000 orders per hour.
Stress Testing - putting the application under test under the maximum anticipated load (in your case 3000 TPH). Another approach is gradually increasing the load until response time starts exceeding acceptable thresholds or errors start occurring (whichever comes first), or up to 6000 TPH if you don't plan to scale up. This way you will be able to identify the bottleneck and determine which component fails first, which could be:
a lack of hardware power
problems with the database
inefficient algorithms used in your application
You can also consider executing a Soak Test - putting your application under prolonged load; this way you will be able to catch the majority of memory leaks.
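If you need a concrete starting point for generating that load, below is a minimal sketch using Locust, a Python load-testing tool; the /checkout endpoint, payload and think times are placeholders you would replace with your application's real order flow:

from locust import HttpUser, task, between

class OrderUser(HttpUser):
    # Think time between actions; tune this together with the user count so the
    # aggregate request rate matches the target TPH (1000 orders/hour for the
    # load test, 3000 for stress, 6000 for next season's scalability run).
    wait_time = between(1, 3)

    @task
    def place_order(self):
        # Placeholder endpoint and payload - replace with your real order flow.
        self.client.post("/checkout", json={"sku": "TEST-001", "qty": 1})

You can run it headless with something like locust -f orders.py --headless --host https://your-shop.example --users 100 --spawn-rate 10 --run-time 1h, then raise the user count (or use a custom LoadTestShape) for the stress and scalability runs.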
I would like to set up a backtesting strategy which allows me to trade up to a certain stock position, such as 10000.
Therefore, I can go long or short up to 10000 but not over.
I have currently set up a backtesting strategy but I cannot work out how to stop it trading once it hits a limit.
If I buy 10000, then I should only be allowed to sell and not buy.
I have this:
df_tradable["Trade"].groupby(df_tradable["Date"]).cumsum()
This adds up all of my trades for a day.
(Trade is +1 or -1, depending on whether it is a buy or a sell.)
I can set up another check which only adds up P&L when my daily traded position is less than 10000; however, I still want to be able to sell once the limit is reached.
Is there an easy way to set this up please?
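For reference, one way to express the cap (just a sketch, assuming df_tradable has Date and Trade columns and a fixed limit of 10000) is to walk the trades within each day and drop any trade that would push the running position past the limit:

import pandas as pd

LIMIT = 10000  # maximum absolute position, long or short

def cap_position(trades, limit=LIMIT):
    # `trades` is the Series of +1/-1 signals for a single day; any trade that
    # would push the running position beyond +/-limit is zeroed out, so once
    # you are long 10000 only sells get through.
    position = 0
    kept = []
    for trade in trades:
        if abs(position + trade) <= limit:
            position += trade
            kept.append(trade)
        else:
            kept.append(0)
    return pd.Series(kept, index=trades.index)

# Hypothetical usage on the DataFrame from the question:
# df_tradable["CappedTrade"] = df_tradable["Trade"].groupby(df_tradable["Date"]).transform(cap_position)
# df_tradable["Position"] = df_tradable["CappedTrade"].groupby(df_tradable["Date"]).cumsum()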
Imagine we have a smart house and want to power up devices in a way that spends less money on electricity.
For every device we know how many hours it should work (continuously) and the amount of energy it consumes. We assume that each device completes one continuous cycle every day.
Moreover, we have the maximum amount of electricity that can be consumed in each hour (as the sum of the electricity consumed by every device powered up at that moment).
Finally, we have the cost of electricity for every hour.
What is the best algorithm for minimizing the money spent on electricity?
I would like to hear any ideas.
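One idea, not necessarily the best algorithm: since each device runs one uninterrupted block per day, you can model each device's start hour as a decision variable in a small integer program. Below is a rough sketch using the PuLP library; the device data, prices and capacity are made up for illustration:

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

HOURS = 24
price = [0.10] * 7 + [0.25] * 12 + [0.15] * 5   # example cost of electricity for each hour
capacity = 5.0                                   # example max power drawn in any hour
devices = {                                      # name: (duration in hours, power while running)
    "washer": (2, 2.0),
    "dryer":  (1, 3.0),
    "boiler": (3, 1.5),
}

prob = LpProblem("smart_house_schedule", LpMinimize)

# start[d][t] == 1  <=>  device d begins its continuous run at hour t
start = {d: [LpVariable(f"start_{d}_{t}", cat=LpBinary) for t in range(HOURS)]
         for d in devices}

def running(d, t):
    # Device d is on during hour t if it started within the last `dur` hours.
    dur = devices[d][0]
    return lpSum(start[d][s] for s in range(max(0, t - dur + 1), t + 1))

# Each device starts exactly once, early enough to finish within the day.
for d, (dur, _) in devices.items():
    prob += lpSum(start[d][t] for t in range(HOURS - dur + 1)) == 1
    for t in range(HOURS - dur + 1, HOURS):
        prob += start[d][t] == 0

# Hourly capacity limit on the total power drawn.
for t in range(HOURS):
    prob += lpSum(devices[d][1] * running(d, t) for d in devices) <= capacity

# Objective: total cost of the energy consumed.
prob += lpSum(devices[d][1] * price[t] * running(d, t)
              for d in devices for t in range(HOURS))

prob.solve(PULP_CBC_CMD(msg=False))
for d in devices:
    print(d, "starts at hour", next(t for t in range(HOURS) if start[d][t].value() > 0.5))

With a handful of devices and 24 hours this solves instantly; greedy or dynamic-programming approaches may also work, but the integer program keeps the hourly capacity constraint easy to express.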