I have a problem where I am given n accounts, each of which stores a percentage of the user's net worth. I have to move the money from the current distribution to a user-specified distribution. To actually move the money, I call the method moveMoney(initialAcct, finalAcct, amnt), which I must call as few times as possible.
This looks like a classic algorithm problem, but no solution comes to mind. It seems a little like a packing optimization, but I can't quite get it.
Consider the case where there are at least two ways of covering the missing amount in some account, but there's a benefit to choosing one over the other. For example:
index:   0   1   2   3   4   5   6   7
input:   0   1   6   3   4   0   0   0
target:  7   0   0   0   0   1   1   5
needed:  7  -1  -6  -3  -4   1   1   5
If we choose to cover the 7 needed at index 0 with the overages (1, 6), we get
amount   index
1 -> 0   // from index 1
6 -> 0   // from index 2
3 -> 5   // from index 3
1 -> 6   // from index 5
1 -> 7   // from index 5
4 -> 7   // from index 4
6 transfers.
Whereas if we choose to cover the 7 with (3, 4), we get
amount   index
3 -> 0   // from index 3
4 -> 0   // from index 4
1 -> 5   // from index 1
6 -> 6   // from index 2
5 -> 7   // from index 6
5 transfers.
We can therefore see that, from any one state, we may need to consider every possible exact cover as we transition toward the target. And because different transitional overages can open up different exact covers later, even the choices that create those overages are not guaranteed to be interchangeable.
One thing is certain, though: we want to move all overages to deficits.
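For a small n like this, one way to sanity-check the reasoning is an exhaustive search. Below is a brute-force sketch in Python (the name min_transfers and the settle-accounts-left-to-right framing are mine, not part of the original problem); it takes the needed row from above and returns the fewest moveMoney calls, without reconstructing the actual sequence of moves:

def min_transfers(needed):
    # Keep only unsettled accounts; needed[i] = target[i] - input[i].
    debts = [d for d in needed if d != 0]

    def dfs(start):
        # Skip accounts already settled by earlier merges.
        while start < len(debts) and debts[start] == 0:
            start += 1
        if start == len(debts):
            return 0
        best = float('inf')
        for i in range(start + 1, len(debts)):
            # One transfer can only move money between an overage and a deficit.
            if debts[start] * debts[i] < 0:
                debts[i] += debts[start]           # settle `start` against `i`
                best = min(best, 1 + dfs(start + 1))
                debts[i] -= debts[start]           # backtrack
        return best

    return dfs(0)

print(min_transfers([7, -1, -6, -3, -4, 1, 1, 5]))  # 5, matching the (3, 4) cover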
I have a dataset that looks like the one below. I have a funnel chart visualization that shows the steps, the counts, and the % of total. The problem is that the labels are too large and get truncated by the chart, so I'm looking to recreate a table with the data labels right below the visual.
Step | Counts
1    | 1234
2    | 1000
3    | 753
4    | 342
The question is: how do I recreate the % of Total calculation? Every step's count should be divided by the step 1 count to get the % of total, and I can't figure it out using sumOvers, sumIfs, etc. The ideal output is below:
Step | Counts | Perc
1    | 1000   | 100%
2    | 900    | 90%
3    | 700    | 70%
4    | 300    | 30%
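I can't confirm the exact calculated-field syntax for the BI tool here, but the arithmetic itself is just each step's count divided by the step 1 count. A small pandas sketch of that computation (the frame df and the column names are mine), using the numbers from the ideal output:

import pandas as pd

df = pd.DataFrame({"Step": [1, 2, 3, 4], "Counts": [1000, 900, 700, 300]})

# % of Total: divide every step's count by the step 1 count.
step1 = df.loc[df["Step"] == 1, "Counts"].iloc[0]
df["Perc"] = (df["Counts"] / step1).map("{:.0%}".format)
print(df)
#    Step  Counts  Perc
# 0     1    1000  100%
# 1     2     900   90%
# 2     3     700   70%
# 3     4     300   30%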
I'm learning Raft and already know its basic mechanism.
When a Leader is elected, it is responsible for bringing the Followers' logs in line with its own. When updating a Follower, it searches backwards for the first matching <entry, term> and then overwrites the Follower with the log entries that follow it.
How does Raft guarantee that the Leader's and the Follower's logs are identical before the matched <entry, term>? Can this case happen:
                    |
Leader              v
Entry : 1 2 3 4 5 6 7 8 9 10
Term  : 1 1 1 2 2 3 3 3 3 3
Follower
Entry : 1 2 3 4 5 6 7
Term  : 1 1 1 1 2 3 3
This property of the Raft algorithm is called Log Matching, and it rules out exactly the case in your diagram.
If two logs contain an entry with the same index and term, then the
logs are identical in all entries up through the given index
This holds because:
When sending an AppendEntries RPC, the leader includes the index and
term of the entry in its log that immediately precedes the new
entries. If the follower does not find an entry in its log with the
same index and term, then it refuses the new entries. The consistency
check acts as an induction step: the initial empty state of the logs
satisfies the Log Matching Property, and the consistency check
preserves the Log Matching Property whenever logs are extended. As a
result, whenever AppendEntries returns successfully, the leader knows
that the follower’s log is identical to its own log up through the new
entries.
Source: https://raft.github.io/raft.pdf
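As a toy illustration of that induction step (my own sketch in Python, not the paper's pseudocode), here is the follower-side consistency check; the comments walk through why the diagram's mismatch at entry 4 could never have been created:

# A log is modeled as a list of terms: log[i-1] is the term of entry i.
def append_entries(log, prev_index, prev_term, entries):
    # Consistency check: refuse unless we hold the entry the leader says
    # immediately precedes the new ones. This is the induction step.
    if prev_index > 0 and (len(log) < prev_index or log[prev_index - 1] != prev_term):
        return False  # the leader will retry with an earlier prev_index
    del log[prev_index:]   # drop any conflicting suffix
    log.extend(entries)    # then append the leader's entries
    return True

follower = [1, 1, 1, 1, 2, 3, 3]  # the follower's terms from the diagram
# For the follower to have ever accepted entry 5 with term 2, the (unique)
# term-2 leader must have sent prev = <4, 2>, which would have required the
# follower's entry 4 to have term 2 at that time -- so the drawn state, with
# term 1 at entry 4 alongside <5, 2>, can never arise.
print(append_entries(follower, 7, 3, [3]))  # True: appends entry 8, term 3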
I'm hoping that someone can just confirm my understanding of how the resource manager works...
If I've got a 4-node RAC with 2 consumer groups and 2 services, and the services pin each consumer group to fixed nodes, i.e. consumer group 1 ALWAYS gets sent to nodes 1 and 2 and consumer group 2 ALWAYS gets sent to nodes 3 and 4.
If I've got a tiered resource plan such as:
Group Name | L0  | L1  | Max
Group 1    | 75% | 0   | 80%
Group 2    | 0   | 75% | 80%
Am I right in saying that, as group 1 is on nodes 1 and 2 and group 2 is on nodes 3 and 4, they will each have 75% of resources available on their respective nodes, and both be limited to 80% on the node they are running on?
I.e. resources are constrained and calculated on a per-node basis, not per cluster.
So even if a group 1 connection on node 1 is using 80% of resources, another group 1 connection on node 2 will still have up to 80% available to it, not 0%.
And similarly, if group 1 is using its allocated maximum, group 2 will still get its full share on nodes 3 and 4, since group 1, the higher-priority group, isn't running on those nodes.
I've had a response from Oracle Support:
Resource management's limits are applied per node except PARALLEL_TARGET_PERCENTAGE, so for your example, you are right.
Connections in consumer group 2 only ever hit node 2 (due to the
services), group 2 will get a minimum of 75% of resources on the 2nd
node and potentially 100% if no max limit has been set or 80% if the max limit has been set.
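To restate the confirmed behavior as a toy model (the structure and names are mine, not Oracle's implementation): each node does its own accounting, so the answer to "how much can this group use here?" never depends on usage elsewhere in the cluster.

# Per-node Resource Manager accounting, sketched as a toy model.
PLAN = {  # guaranteed share at its level, plus the per-node max limit
    "group1": {"share": 0.75, "max": 0.80},
    "group2": {"share": 0.75, "max": 0.80},
}
SERVICES = {"group1": {"node1", "node2"}, "group2": {"node3", "node4"}}

def ceiling(group, node):
    # Most CPU the group can use on this node, independent of other nodes.
    if node not in SERVICES[group]:
        return 0.0                # the service never routes this group here
    return PLAN[group]["max"]     # 75% is a guaranteed floor, 80% the cap

for node in ["node1", "node2", "node3", "node4"]:
    print(node, {g: ceiling(g, node) for g in PLAN})
# group1 can reach 80% on nodes 1-2 regardless of its usage on the other node;
# group2 likewise on nodes 3-4, since group1 never runs there.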
When does the MFU (Most Frequently Used) page replacement algorithm perform better than LRU (Least Recently Used)? When is it worse than LRU?
Where can I find information beyond the basic definition of the MFU page replacement algorithm?
Typically, I've seen an MFU cache used as the primary, backed by a secondary cache that uses an LRU replacement algorithm (i.e. one that retains the most recently used items). The idea is that the most frequently used things will remain in the primary cache, giving very quick access. This reduces the "churn" that you see in a recency-based cache when a small number of items are used very frequently. It also prevents those commonly used items from being evicted from the cache just because they haven't been used for a while.
MFU works well if you have a small number of items that are referenced very frequently and a large number of items that are referenced infrequently. A typical desktop user, for example, might have three or four programs that he uses many times a day, and hundreds of programs that he uses very infrequently. If you wanted to improve his experience by caching programs in memory so that they start quickly, you're better off caching the ones he uses very frequently.
On the other hand, if you have a large number of items that are referenced essentially randomly, or some items are accessed only slightly more often than others, or items are typically referenced in batches (i.e. item A is accessed many times over a short period and then not at all), then an LRU cache eviction scheme will likely be better.
Least Recently Used (LRU) Page Replacement Algorithm
In this algorithm, the page that has not been used for the longest period of time has to be replaced.
Advantages of LRU Page Replacement Algorithm:
It is amenable to full statistical analysis.
Never suffers from Belady’s anomaly.
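As a quick sketch (mine, not from the original text), LRU is easy to model with an ordered map; on the same reference string as the MFU example below, it gives 12 page faults with 3 frames:

from collections import OrderedDict

def lru_faults(refs, nframes):
    frames = OrderedDict()  # insertion order doubles as recency order
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict the least recently used
            frames[page] = True
    return faults

print(lru_faults([7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1], 3))  # 12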
Most Frequently Used (MFU) Page Replacement Algorithm
The MFU algorithm assumes that the page which has been used most frequently will not be needed again soon, so it replaces the most frequently used page.
Example: consider the following reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Buffer size: 3
String : 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Frame 1: 7 7 7 2 2 2 0 4 2 2 0 0 2 2 2 0 0 7 7 7
Frame 2: - 0 0 0 0 3 3 3 3 3 3 3 3 3 3 3 3 3 0 0
Frame 3: - - 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
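A short simulator (my sketch) reproduces the table above. Two details the table doesn't state, which I inferred from its columns: reference counts are global (a page keeps its count across evictions), and ties for the highest count are broken by evicting from the lowest-numbered frame.

def mfu_trace(refs, nframes):
    frames = [None] * nframes
    count = {}                       # global reference counts
    for page in refs:
        count[page] = count.get(page, 0) + 1
        if page not in frames:
            if None in frames:
                frames[frames.index(None)] = page        # fill a free frame
            else:
                # Evict the resident page with the highest count; max() scans
                # left to right, so ties go to the lowest-numbered frame.
                victim = max(range(nframes), key=lambda i: count[frames[i]])
                frames[victim] = page
        print(page, "->", frames)

mfu_trace([7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1], 3)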
I had been struggling to find a use case for MFU; everywhere, MFU is confused with MRU. The most cited use case of MFU is:
The most frequently used (MFU) page-replacement algorithm is based on
the argument that the page with the smallest count was probably just
brought in and has yet to be used.
But it can clearly be understood that they are talking about MRU, a most recently used cache.
What I could find was a paper which described using both MFU and LFU: most frequently used references are moved to a primary cache for faster access, and least frequently used references are moved to a secondary cache. That's the only use case I could find for MFU.
In this web page: CS302 --- External Sorting
Merge the resulting runs together into successively bigger runs, until the file is sorted.
As I quoted, how can we merge the resulting runs together? We don't have that much memory.
Imagine you have the numbers 1 - 9
9 7 2 6 3 4 8 5 1
And let's suppose that only 3 fit in memory at a time.
So you'd break them into chunks of 3 and sort each, storing each result in a separate file:
2 7 9
3 4 6
1 5 8
Now you'd open each of the three files as streams and read the first value from each:
2 3 1
Output the lowest value 1, and get the next value from that stream; now you have:
2 3 5
Output the next lowest value 2, and continue onwards until you've outputted the entire sorted list.
If you process two runs A and B into some larger run C, you can do this line by line, generating progressively larger runs while still reading at most two lines at a time. Because the process is iterative, and because you're working on streams of data rather than whole datasets, you don't need to worry about memory usage. On the other hand, disk access might make the whole process slow -- but it sure beats not being able to do the work at all.
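Here's a minimal sketch of the merge step in Python (the file layout and names are my assumptions, not from the linked page). heapq.merge pulls lazily from each run, so only one value per open run sits in memory at a time:

import heapq

def read_run(path):
    # Stream one integer per line from a sorted run file.
    with open(path) as f:
        for line in f:
            yield int(line)

def merge_runs(run_paths, out_path):
    streams = [read_run(p) for p in run_paths]
    with open(out_path, "w") as out:
        for value in heapq.merge(*streams):   # k-way merge, lazily evaluated
            out.write(f"{value}\n")

# With run files holding (2 7 9), (3 4 6) and (1 5 8), this writes 1..9:
# merge_runs(["run1.txt", "run2.txt", "run3.txt"], "sorted.txt")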