I would like to know how to calculate a cumulative sum in AnyLogic. Specifically, I have a cyclic event that changes the value of a parameter every week. From this parameter I would like to calculate the cumulative sum of the values it has received. How can I do that?
The event is a Timeout with mode Cyclic. The action is:
"name_parameter"=round(max(normal(10,200),0));
Create a parameter with an initial value of 0. Call it sum. In the event action field use:
name_parameter = round(max(normal(10,200),0));
sum += name_parameter;
We rent our car out to customers. We have a list in which each element's first entry represents the time at which the car will be lent, the second the time at which it will be returned, and the third the profit earned from that lending. I need to find the maximum profit that can be earned.
Eg:
( [1,2,20], [3,6,15], [2,8,25], [7,12,18], [13,31,22] )
The maximum profit earned is 75: [1,2] + [3,6] + [7,12] + [13,31].
Intervals may overlap. We need to select the subset that maximizes our profit.
Assuming you have only one car, the problem we are solving is Weighted Interval Scheduling.
Let us assume we have intervals I0, I1, I2, ..., In-1, where interval Ii is (si, ti, pi): start time, end time, and profit.
Algorithm:
First, sort the intervals by their starting points si.
Create an array for dynamic programming: MaxProfit[i] represents the maximum profit you can make from intervals Ii, Ii+1, ..., In-1. Initialise the last value:
MaxProfit[n-1] = profit_of_(n-1)
Then, using DP, we can find the maximum profit for interval Ii as follows:
a. Either we ignore the given interval; in this case our maximum profit is the maximum profit we can gain from the remaining intervals:
MaxProfit[i+1]
b. Or we include this interval; in this case the maximum profit can be written as
profit_of_i + MaxProfit[r]
where r is the index of the next interval such that sr > ti.
So our overall DP recurrence becomes
MaxProfit[i] = max{ MaxProfit[i+1], profit_of_i + MaxProfit[r] }
Finally, return the value of MaxProfit[0].
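Here is a minimal Python sketch of this DP (the names are mine; it assumes the intervals come as [start, end, profit] triples and uses binary search to find the index r):

from bisect import bisect_right

def max_profit(intervals):
    # sort by start time s_i
    intervals = sorted(intervals, key=lambda iv: iv[0])
    starts = [s for s, t, p in intervals]
    n = len(intervals)
    # best[i] = maximum profit using intervals i..n-1; best[n] = 0 is the base case
    best = [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        s, t, p = intervals[i]
        r = bisect_right(starts, t)  # first interval whose start is strictly after t
        best[i] = max(best[i + 1], p + best[r])
    return best[0]

print(max_profit([[1, 2, 20], [3, 6, 15], [2, 8, 25], [7, 12, 18], [13, 31, 22]]))  # 75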
Use something like dynamic programming.
First, sort by the first element (the start time).
Keep two rows: one holding the most that can be earned if the current interval is used, and another for the most that can be earned if it is not used.
Then place each task in its period of time and check, for each time slot, whether including it is a good choice or not.
Take care that if the intervals do not overlap at all, we choose all of them.
Requirements of special counter
I want to implement a special counter: all increment operations time out after a fixed period of time (say 30 days).
An example:
Day 0: counter = 0. TTL = 30 days
Day 1: increment counter (+1)
Day 2: increment counter (+1)
Day 3: value of counter == 2
Day 31: value of counter == 1
Day 32: value of counter == 0
Naive solution
A naïve implementation is to maintain a set of timestamps, where each timestamp equals the time of an increment. The value of the counter equals the size of the set after subtracting all timestamps that have timed out.
This naïve counter uses O(n) space (the size of the set), with O(n) lookup and O(1) insert. The values are exact.
Better solution (for me)
Trade accuracy for speed and memory.
I want a counter with O(1) lookup and insert, and O(1) space. Accuracy may be less than exact.
Alternatively, I would accept O(log n) space and lookup.
The counter representation should be suited for storage in a database field, i.e., I should be able to update and poll the counter rapidly without too much (de)serialization overhead.
I'm essentially looking for a counter that resembles a HyperLogLog counter, but for a different type of approximate count: decaying increments vs. number of distinct elements
How could I implement such a counter?
If you can live with 24 hour granularity then you can bucket your counter into k buckets where k is the number of days in your longest TTL.
Incrementing is an O(1) operation: simply increment the value in the bucket with index (k - TTL), as well as the running sum total.
Reading is another O(1) operation, as you simply read the current sum total.
A cron job pops off the now-expired bucket each night (and adds a bucket with value 0 at the opposite end) and decreases your counter by the sum in that bucket. (This is a background task, so it does not affect your insert or read operations.)
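A minimal Python sketch of this scheme, assuming a single fixed TTL so that every increment lands in the newest bucket (the class and method names are mine):

from collections import deque

class BucketedTTLCounter:
    def __init__(self, ttl_days=30):
        # one bucket per day; buckets[0] is the oldest, buckets[-1] the newest
        self.buckets = deque([0] * ttl_days, maxlen=ttl_days)
        self.total = 0

    def increment(self, amount=1):
        # O(1): bump the newest bucket and the running total
        self.buckets[-1] += amount
        self.total += amount

    def value(self):
        # O(1): just read the running total
        return self.total

    def roll_over_one_day(self):
        # the nightly cron job: subtract the expired bucket, open a fresh one
        self.total -= self.buckets[0]
        self.buckets.append(0)  # with maxlen set, this evicts buckets[0]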
Decaying counter based on annealing
Here is a counter that is based on annealing (implemented in Python).
The counter decays exponentially over time, controlled by the decay rate alpha
When you read and write the counter, you provide a time index (increment or read the counter at time t)
You can read the counter in the present and future (w.r.t. index of last increment), but not in the past
Time indices of sequential increments must be weakly monotonically increasing
The algorithm is exact w.r.t. the alternative formulation (annealing vs. TTL). It has O(1) increment and read. It consumes O(1) space, in fact just three floating point fields.
class AnnealingCounter():

    def __init__(self, alpha=0.9):
        self.alpha = alpha  # rate of decay
        self.last_t = .0    # time of last increment
        self.heat = .0      # value of counter at last_t

    def increment(self, t=None, amount=1.0):
        """
        t is a floating point temporal index.
        If t is not provided, the value of last_t is used.
        """
        if t is None: t = self.last_t
        elapsed = t - self.last_t
        if elapsed < .0:
            raise ValueError('Cannot increment the counter in the past, i.e. before the last increment')
        self.heat = amount + self.heat * (self.alpha ** elapsed)
        self.last_t = t

    def get_value(self, t=None):
        """
        t is a floating point temporal index.
        If t is not provided, the value of last_t is used.
        """
        if t is None: t = self.last_t
        elapsed = t - self.last_t
        if elapsed < .0:
            raise ValueError('Cannot read the counter in the past, i.e. before the last increment')
        return self.heat * (self.alpha ** elapsed)

    def __str__(self):
        return 'Counter has value {} at time {}'.format(self.heat, self.last_t)

    def __repr__(self):
        return self.__str__()
Here is how to use it:
>>> c = AnnealingCounter(alpha=0.9)
Counter has value 0.0 at time 0.0
>>> c.increment() # increment by 1.0, but don't move time forward
Counter has value 1.0 at time 0.0
>>> c.increment(amount=3.2, t=0.5) # increment by 3.2 and move time forward (t=0.5)
Counter has value 4.14868329805 at time 0.5
>>> c.increment() # increment by 1.0, but don't move time forward
Counter has value 5.14868329805 at time 0.5
>>> c.get_value() # get value as after last increment (t=0.5)
5.148683298050514
>>> c.get_value(t=2.0)  # get future value (t=2.0)
4.396022866630942
Since the increments expire in the same order as they happen, the timestamps form a simple queue.
The current value of the counter can be stored separately in O(1) additional memory. At the start of each operation (insert or query), while the front of the queue is expired, it's popped out of the queue, and the counter is decreased.
Note that each of the n timestamps is created and popped exactly once, so you get O(1) amortized time per operation, and O(n) memory for the non-expired timestamps. The peak memory usage is bounded by the number of insertions that can occur within one TTL window (the TTL times the insertion rate).
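A small Python sketch of this queue-based counter (the names are mine; timestamps are in seconds, and expiry runs lazily at the start of each operation):

from collections import deque
import time

class QueueTTLCounter:
    def __init__(self, ttl_seconds=30 * 24 * 3600):
        self.ttl = ttl_seconds
        self.queue = deque()  # insertion timestamps, oldest first
        self.count = 0

    def _expire(self, now):
        # pop expired timestamps; each one is created and popped exactly once
        while self.queue and now - self.queue[0] > self.ttl:
            self.queue.popleft()
            self.count -= 1

    def increment(self, now=None):
        now = time.time() if now is None else now
        self._expire(now)
        self.queue.append(now)
        self.count += 1

    def value(self, now=None):
        now = time.time() if now is None else now
        self._expire(now)
        return self.count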
I have the following lines of Java code; I am wondering whether they can be converted entirely to Java 8 stream fashion.
long totalSum = list.parallelStream().mapToLong(ExpenseInfo::getCurrCount).sum();
// LOOP ALL COLLECTION
for (ExpenseInfo info : list) {
totalSum -= info.getCurrCount();
info.setBurnCount(totalSum);
}
The thing is that your task is sequential by nature: the first element has its burn count set to the total sum minus its own count, while the second element has its burn count set to the total sum minus its own count and the count of the previous element, and so on.
That is, if you were to turn this sequence of instructions into a stream, whether sequential or parallel (using .stream() or .parallelStream() should have no consequence on the result you compute), the totalSum variable would be shared, and every element in the ordered stream would have to wait for the updated value of the total sum computed by every previous element, which would defeat the purpose of using parallelism entirely.
That said, you can use another approach that would first map each instance to its own curr count and then use Arrays.parallelPrefix to compute the cumulated sum into an array.
Finally, you can use an IntStream to set the burn count of each element by subtracting the cumulated sum for that element from the total sum.
long[] sums = list.stream().mapToLong(ExpenseInfo::getCurrCount).toArray();
Arrays.parallelPrefix(sums, Long::sum);
IntStream.range(0, sums.length).forEach(i -> list.get(i).setBurnCount(sums[sums.length - 1] - sums[i]));
Of course, this supposes that the list is random access, so that get(int i) is not an expensive operation, but everything can be parallelized without problems (in fact parallelPrefix is already parallelized, as its name suggests).
I would still keep your original approach in the first place; it seems clearer to me.
I need to implement an event (a stock loss error) that occurs at random time intervals, as a renewal process. With every day of non-occurrence, the probability of occurrence on a following day increases, based on an exponential distribution: "The time intervals are based on an exponential distribution with a mean time between stock loss events (TBSLE). The frequency of (stock loss) occurrence is the reciprocal of TBSLE. The expected value for the mean stock loss quantity can be estimated as 2.05."
First try:
import numpy as np

def stockLossError(self):
    stockLossErrorProbability = 0
    inverseLambda = ...  # mean time between stock loss events (TBSLE)
    errors = 0
    randomnumber = np.random.exponential(scale=inverseLambda, size=(1, 1))
    if randomnumber > stockLossErrorProbability:
        self.daysSinceLastError += 1
        self.errors += 2.05
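For what it's worth, a common way to drive such a renewal process is to sample the gap until the next event directly from the exponential distribution, rather than comparing a sample against a probability. A minimal sketch under that reading (mean_tbsle is a hypothetical placeholder; 2.05 is the stated mean stock loss quantity):

import numpy as np

class StockLossProcess:
    def __init__(self, mean_tbsle):
        self.mean_tbsle = mean_tbsle  # hypothetical TBSLE, in days
        # days until the next stock loss event
        self.days_until_next = np.random.exponential(scale=self.mean_tbsle)
        self.total_loss = 0.0

    def advance_one_day(self):
        self.days_until_next -= 1.0
        while self.days_until_next <= 0.0:
            self.total_loss += 2.05  # mean stock loss quantity per event
            # sample the gap to the following event
            self.days_until_next += np.random.exponential(scale=self.mean_tbsle)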
Interview question:
In a client-server architecture, there are multiple requests from multiple clients to the server. The server should maintain the response times of all the requests in the previous hour. What data structure and algo will be used for this? Also, the average response time needs to be maintained and has to be retrieved in O(1).
My take:
algo: maintain a running mean
mean = (mean_prev * n + current_response_time) / (n + 1)
DS: a set (using order statistic tree).
My question is whether there is a better answer. I felt that my answer was very trivial, while the answers to the questions before and after this one (in the interview) were non-trivial.
EDIT:
Based on what amit suggested:
cleanup()
    while (curr_time - queue.front().timestamp > 1hr):
        (timestamp, val) = queue.pop();
        sum = sum - val;
        n = n - 1;

insert(timestamp, val)
    queue.push(timestamp, val);
    sum = sum + val;
    n = n + 1;
    cleanup();

query_average()
    cleanup();
    return sum / n;
And if we can ensure that cleanup() is triggered once every hour or half hour, then query_average() will not take very long. But if someone were to implement a timer trigger for such a function call, how would they do it?
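One simple option, sketched in Python (this assumes the cleanup() function defined above; threading.Timer fires once, so the callback reschedules itself):

import threading

def schedule_cleanup(interval_seconds=1800):
    # run cleanup now, then schedule the next run (here, every half hour)
    cleanup()
    timer = threading.Timer(interval_seconds, schedule_cleanup, args=(interval_seconds,))
    timer.daemon = True  # don't keep the process alive just for cleanups
    timer.start()

schedule_cleanup()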
The problem with your solution is that it takes the total average since the beginning of time, and not over the last hour, as you are supposed to.
To do so, you need to maintain 2 variables and a queue of entries (timestamp,value).
The 2 variables will be n (the number of elements that are relevant to the last hour) and sum, the sum of the elements from the last hour.
When a new element arrives:
queue.add(timestamp,value)
sum = sum + value
n = n+1
When you have a query for average:
while (queue is not empty and queue.front().timestamp < currentTimestamp() - 1 hour):
    (timestamp, value) = queue.pop()
    sum = sum - value
    n = n - 1
return sum/n
Note that the above is still O(1) on average, because every element inserted into the queue is deleted at most once. You might add the above loop to the insertion procedure as well.
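Putting the pieces together, a minimal Python sketch of this sliding-window average (the names are mine; the window is one hour, in seconds):

from collections import deque
import time

class SlidingWindowAverage:
    def __init__(self, window_seconds=3600):
        self.window = window_seconds
        self.queue = deque()  # (timestamp, value) pairs, oldest first
        self.total = 0.0
        self.n = 0

    def _cleanup(self, now):
        # evict entries older than the window; each entry is popped at most once
        while self.queue and self.queue[0][0] < now - self.window:
            _, value = self.queue.popleft()
            self.total -= value
            self.n -= 1

    def insert(self, value, now=None):
        now = time.time() if now is None else now
        self._cleanup(now)
        self.queue.append((now, value))
        self.total += value
        self.n += 1

    def average(self, now=None):
        now = time.time() if now is None else now
        self._cleanup(now)
        return self.total / self.n if self.n else 0.0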