Customer CPU store average time - cpu

The owner of the store observes that, on average, 18 customers enter every hour, and that there are typically 8 customers in the store. How much time does each customer spend in the store on average?
How does this relate to the CPU execution-time formula t = n*CPI/f?
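For what it's worth, the store part reads like Little's law (average number in the system = arrival rate * average time in the system), so one plausible answer is:

average time in store = customers in store / arrival rate = 8 / 18 hour ≈ 0.44 hours ≈ 27 minutes

The CPU formula t = n*CPI/f has the same shape: n*CPI is the total work in cycles and f is the rate at which cycles are completed, so t is again an amount of work divided by the rate at which it is served.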

Related

Using day trading limits

I would like to set up a backtesting strategy which allows me to trade up to a certain stock position such as 10000.
Therefore, I can go long or short up to 10000 but not over.
I have currently set up a backtesting strategy but I cannot work out how to stop it trading once it hits a limit.
If I buy 10000 then I should only be allowed to sell and not buy.
I have this:
df_tradable["Trade"].groupby(df_tradable["Date"]).cumsum()
This adds up all of my trades for a day.
(Trade is +1 or -1 depending on whether it is a buy or a sell.)
I can set up another check that only adds up P&L while my day's traded position is less than 10000, but once the limit is hit I still want to be able to sell.
Is there an easy way to set this up please?
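A minimal sketch of one way to do this, assuming df_tradable has the "Date" and "Trade" columns as above and that Trade carries the signed size of each trade; the cap_position helper and the CappedTrade/Position column names are made up for illustration:

import pandas as pd

def cap_position(trades, limit=10000):
    # Walk through the day's trades in order and zero out any trade that
    # would push the running position past +/- limit, so after buying up to
    # the limit you can still sell, but not buy more.
    capped = []
    position = 0
    for t in trades:
        if abs(position + t) > limit:
            capped.append(0)   # this trade would breach the cap, skip it
        else:
            position += t
            capped.append(t)
    return pd.Series(capped, index=trades.index)

# Apply per day, then build the running position from the capped trades:
# df_tradable["CappedTrade"] = (
#     df_tradable.groupby("Date")["Trade"]
#                .transform(lambda s: cap_position(s, limit=10000))
# )
# df_tradable["Position"] = df_tradable["CappedTrade"].groupby(df_tradable["Date"]).cumsum()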

Best data structure to find price difference between two time frames

I am working on a project where my task is to find the % price change given 2 time frames for 100+ stocks in an efficient way.
The time frames are predefined and can only be 5 minutes, 10 minutes, 30 minutes, 1 hour, 4 hours, 12 hours, or 24 hours.
Given a time frame, I need a way to efficiently figure out the % price change of all the stocks that I am tracking.
As per the current implementation, I am getting price data for those stocks every second and dumping the data to a price table.
I have another cron job which updates the % change of the stock based on the values in the price table every few seconds.
The solution is more or less working, but it's not efficient. Is there a data structure or algorithm I can use to find the % change efficiently?
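One possibility, sketched under the assumption that prices really do arrive about once a second: keep a bounded in-memory history per stock and compare the latest price with the oldest sample still inside the requested window, so there is no separate cron job or table scan. The PriceHistory class and its method names are made up for illustration.

from collections import deque
import time

WINDOWS = {"5m": 300, "10m": 600, "30m": 1800, "1h": 3600,
           "4h": 14400, "12h": 43200, "24h": 86400}
MAX_WINDOW = max(WINDOWS.values())

class PriceHistory:
    # Rolling per-stock price history covering the largest window we track.
    def __init__(self):
        self.samples = deque()  # (timestamp, price), oldest first

    def add(self, price, ts=None):
        ts = time.time() if ts is None else ts
        self.samples.append((ts, price))
        # Drop anything older than the largest window we ever need.
        while self.samples and self.samples[0][0] < ts - MAX_WINDOW:
            self.samples.popleft()

    def pct_change(self, window_seconds):
        if not self.samples:
            return None
        latest_ts, latest_price = self.samples[-1]
        cutoff = latest_ts - window_seconds
        # Oldest sample still inside the window (a linear scan here; a binary
        # search over timestamps would be faster if it matters).
        old_price = next((p for t, p in self.samples if t >= cutoff), None)
        if old_price is None or old_price == 0:
            return None
        return (latest_price - old_price) / old_price * 100

With one PriceHistory per symbol, every window query is answered from memory, and memory per stock is bounded by one day's worth of samples.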

Distribute user active time blocks subject to total constraint

I am building an agent-based model for product usage. I am trying to develop a function to decide whether the user is using the product at a given time, while incorporating randomness.
So, say we know the user spends a total of 1 hour per day using the product, and we know the average distribution of this time (e.g., most used at 6-8pm).
How can I generate a set of usage/non-usage times (i.e., for each 10-minute block, is the user active or not) while ensuring that at the end of the day the total active time sums to one hour?
In most cases I would just run the distributor without concern for the total, and then at the end normalize by scaling proportionally to the target so the total came to 1 hour. However, I can't do that here because time blocks must be 10 minutes. I think this is a different question because I'm not really computing time ranges; I'm computing booleans to associate with different 10-minute time blocks (e.g., the user was/was not active during a given block).
Is there a standard way to do this?
I did some more thinking and figured it out; posting it here in case anyone else is looking at this.
The approach to take is this: You know the allowed number n of 10-minute time blocks for a given agent.
Iterate n times, and on each iteration select a time block out of the day subject to your activity distribution function.
The main point is to iterate over the number of time blocks you want to place, not over the entire day.
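A minimal sketch of that approach; I sample the n blocks without replacement so the same block isn't picked twice, and the function name, the use of numpy, and the uniform default weights are my own choices:

import numpy as np

def place_active_blocks(total_minutes=60, block_minutes=10, weights=None, rng=None):
    # Pick exactly n_active blocks out of the day's blocks, sampling without
    # replacement according to the activity distribution, so the daily total
    # always sums to total_minutes.
    rng = np.random.default_rng() if rng is None else rng
    n_blocks_day = 24 * 60 // block_minutes      # 144 ten-minute blocks
    n_active = total_minutes // block_minutes    # e.g. 6 blocks for 1 hour
    p = np.ones(n_blocks_day) if weights is None else np.asarray(weights, dtype=float)
    p = p / p.sum()
    chosen = rng.choice(n_blocks_day, size=n_active, replace=False, p=p)
    active = np.zeros(n_blocks_day, dtype=bool)
    active[chosen] = True
    return active  # one boolean per block; exactly n_active are True

The weights can be any non-negative vector describing the agent's daily activity profile (e.g. peaking at 6-8pm); sampling without replacement is what guarantees the total comes out exact.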

Concurrent User Calculation

I am trying to calculate the average concurrent users using the formula below:
Average Concurrent Users = Visits per hour / (60 min/hour / average visit)
Visits per hour is 750.
Average visit is 1.6 minutes (the amount of time a user spends to complete the use case).
Thus the average concurrent users come out to around 20.
Now I made some performance improvements and the average visit came down to 1.2 minutes. Using the formula again, the average concurrent users come out to around 15.
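(For reference, the arithmetic is consistent: 750 visits/hour * 1.6 min / 60 min/hour = 20, and 750 * 1.2 / 60 = 15. The formula is essentially Little's law: average concurrency = arrival rate * average time in the system.)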
Logically, when we make a performance improvement the concurrent users should increase rather than decrease. Is there any problem with the calculation?
You are correct. Concurrent user sessions will decrease if the average session time decreases and all else remains the same. This can be a good thing, if users are able to do their business more quickly and get on with their lives.
For performance tuning and capacity planning, measuring concurrent sessions is much less useful than raw requests per second (throughput) and average or median response time (latency).
Think about it this way: when a user is reading a web page they downloaded, the server isn't doing anything. While 1,000 users are reading pages, the server still isn't doing anything. The only parts of a user session that matter are between the click and the response.

Governor limits with reports in SFDC

We have a business requirement to show a cost summary for all our projects in a single table.
In order to tabulate these costs we have to query through all the client tasks, regions, job roles, pay rates, cost tables, deliverables, efforts, and hour records (client tasks are in the same table, tasks and regions are in the same table, and deliverables, effort, and hours are stored as monthly totals).
Since I have to query all of this before I go for-looping through everything, it hits a large number of script lines very quickly. Computationally it's like O(m * n * o * p), and some of our projects have all four variables growing very quickly. My estimates for how to do this have ranged from 90 million script lines to 600 billion.
Using Batch Apex we could break this up by task regions into 200 batches, but that would only reduce the computational profile to (600 billion / 200) = 3 billion script lines, still well above the Salesforce limit.
We have been playing around with using Informatica to do these massive calculations, but we have several problems, including (1) our end users cannot wait more than five or so minutes, but just transferring the data (90% of all records if all the projects got updated at once) would take 15 minutes over Informatica or the web API, and (2) we have noticed these massive calculations need to happen in several places (changing a deliverable forecast value, creating an initial forecast, etc.).
Is there a governor limit workaround that will meet our requirements here (a massive volume of data with a response in five or so minutes)? Is force.com a good platform for us to use here?
This is the way I've been doing it for a similar calculation:
An ERD would help, but have you considered doing this in smaller pieces and with reports in Salesforce instead of custom code?
By smaller pieces I mean, use roll-up summary fields to get some totals higher in your tree of objects.
Or use Apex triggers so that as hours are entered, cost * hours is calculated and placed onto the time record, and then rolled up to the deliverables.
Basically get your values calculated at the time the data is entered instead of having to run your calculations every time.
Then you can simply run a report that says show me all my projects and their total cost or total time because those total costs/times are stored/calculated already.
Roll-up summary fields only work with master-detail relationships.
Triggers work anytime, but you'll want to account for insert and update as well as delete and undelete! Aggregate functions will be your friend, assuming the trigger context has fewer than 50,000 records to aggregate - which I'd hope it does, because if there are more than 50,000 time entries for a single deliverable, that's a BIG deliverable :)
Hope that helps a bit?
