I'm trying to figure out how memory limits work and how to choose the right values.
My test server (a VM) has 16 GB of RAM and 4 vCPUs, but it is shared, so I chose to use only 2 vCPUs and 2 GB of RAM.
Following the official documentation, I calculated how many workers and how much RAM I need (https://www.odoo.com/documentation/14.0/administration/install/deploy.html#worker-number-calculation).
W = Workers (workers)
2 workers for 1 CPU
CW = Cron Workers (max_cron_threads)
TW = W + CW
Worker number calculation
(#CPU * 2) + CW
(2 * 2) + 1 = 5 theoretical maximal workers
Memory size calculation
Needed RAM = W * ( (light_worker_ratio * light_worker_ram_estimation) + (heavy_worker_ratio * heavy_worker_ram_estimation) )
5 * ((0.8 * 150) + (0.2 * 1024)) = 1624 (~2GB of RAM).
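For what it's worth, the same arithmetic as a small Python sketch (plain arithmetic, nothing Odoo-specific):

```python
cpus = 2          # vCPUs dedicated to Odoo
cron_workers = 1  # max_cron_threads

# (#CPU * 2) + CW
workers = cpus * 2 + cron_workers

# W * ((light_ratio * light_ram) + (heavy_ratio * heavy_ram)), in MB
ram_mb = workers * (0.8 * 150 + 0.2 * 1024)

print(workers, round(ram_mb))  # 5 1624
```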
Ok, now I go to the "configuration sample" (https://www.odoo.com/documentation/14.0/administration/install/deploy.html#id5) and I see that I need to estimate how many concurrent users I'll have.
Can you confirm that the number of concurrent users includes all website visitors and not only the connected users?
In the configuration sample, how do you calculate/estimate the values of these limits? (limit_memory_hard, limit_memory_soft, limit_request, limit_time_cpu, limit_time_real)
I've read a lot of documentation (official and otherwise), but it never explains how to calculate these values.
Examples:
https://github.com/DocCyblade/tkl-odoo/issues/49 (I really don't understand how DocCyblade arrives at the values in that formula)
https://github.com/DocCyblade/tkl-odoo/blob/master/overlay/etc/odoo/openerp-server.conf
https://linuxize.com/post/how-to-install-odoo-14-on-ubuntu-20-04/
https://www.rosehosting.com/blog/how-to-speed-up-odoo/ (2048 is the default value since Odoo 10, not 640). If I apply that formula, I get:
limit_memory_soft: 5 * 2147483648 = 10737418240
limit_memory_hard: 5 * 2684354560 = 13421772800
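One thing worth keeping in mind: limit_memory_soft and limit_memory_hard are per worker process, in bytes (and limit_time_cpu / limit_time_real are per-request limits in seconds), so multiplying a per-worker value by the worker count gives a total memory budget, not a value to put in the config file. Here is a minimal sizing sketch under my own assumptions (a 2 GiB budget for Odoo and an 0.8 soft/hard ratio are rules of thumb, not values from the documentation):

```python
# Assumptions (mine): Odoo gets a 2 GiB budget, split across 5 workers,
# with the soft limit set at 80% of the hard limit.
total_budget = 2 * 1024 ** 3  # 2 GiB in bytes
workers = 5                   # from the worker calculation

limit_memory_hard = total_budget // workers       # worker is killed above this
limit_memory_soft = int(limit_memory_hard * 0.8)  # worker is recycled after the current request

print(limit_memory_soft, limit_memory_hard)
```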
Can you help me, please?
Thanks
This is a question from a Computer Architecture exam and I don't understand how to get to the correct answer.
Here is the question:
This question deals with main and cache memory only.
Address size: 32 bits
Block size: 128 items
Item size: 8 bits
Cache Layout: 6 way set associative
Cache Size: 192 KB (data only)
Write policy: Write Back
What is the total number of cache bits?
In order to get the number of tag bits, I find that 7 bits of the address are used for the byte offset (0-127) and 8 bits are used for the block number (0-250) (250 = 192000/128/6), so 17 bits of the address are left for the tag.
To find the total number of bits in the cache, I would take (valid bit + tag size + bits per block) * number of blocks per set * number of sets = (1 + 17 + 1024) * 250 * 6 = 1,563,000. This is not the correct answer though.
The correct answer is 1,602,048 total bits in the cache and part of the answer is that there are 17 tag bits. After trying to reverse engineer the answer, I found that 1,602,048 = 1043 * 256 * 6 but I don't know if that is relevant to the solution because I don't know why those numbers would be used.
I'd appreciate it if someone could explain what I did wrong in my calculation to get a different answer.
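For comparison, the published answer is consistent with reading 192 KB as 192 * 1024 bytes (which gives 256 sets, not 250) and with write-back adding a dirty bit next to the valid bit. A quick sanity check under those assumptions (mine, not stated in the exam text):

```python
ADDRESS_BITS = 32
BLOCK_BYTES = 128            # 128 items of 8 bits each
WAYS = 6
CACHE_BYTES = 192 * 1024     # assuming KB means 1024 bytes

sets = CACHE_BYTES // (BLOCK_BYTES * WAYS)           # 256 sets
offset_bits = (BLOCK_BYTES - 1).bit_length()         # 7
index_bits = (sets - 1).bit_length()                 # 8
tag_bits = ADDRESS_BITS - offset_bits - index_bits   # 17

# write-back requires a dirty bit in addition to the valid bit
bits_per_line = BLOCK_BYTES * 8 + tag_bits + 1 + 1   # 1043
total_bits = bits_per_line * sets * WAYS

print(tag_bits, total_bits)  # 17 1602048
```

This reproduces the reverse-engineered 1043 * 256 * 6 = 1,602,048: 1043 is data + tag + valid + dirty bits per line, 256 is the number of sets, and 6 is the associativity.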
From the URL https://developer.ibm.com/hadoop/2017/06/30/deep-dive-yarn-cgroups/ we see the following CPU limit formulas for YARN cgroups. My question is: are those formulas still valid if there are many threads running in each container (say 500, each doing a CPU-intensive operation such as an infinite loop)? That is to say, can YARN cgroups still control CPU usage if the number of threads far exceeds the number of vcores of each container?
"In hard limit mode, YARN uses the following formula to calculate the CPU limit for each container.
C: number of physical CPU cores
P: strict limit percent
V: number of vcores on a single node manager
A: number of vcores for a single container
CPU_LIMIT = C * P * A / V;
In our example, a single container requests one vcore. The CPU limit for this container will be
CPU_LIMIT = C * P * A / V = 8 * 50% * 1 / 6 = 2/3 = 66.7%
And if there is another container requesting 2 vcores, the CPU limit for that container will be
CPU_LIMIT = C * P * A / V = 8 * 50% * 2 / 6 = 4/3 = 133%"
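The quoted formula itself is easy to check numerically (plain arithmetic; the limit is expressed as a percentage of one core):

```python
def cpu_limit(c, p, a, v):
    """CPU limit for one container in hard-limit mode: C * P * A / V."""
    return c * p * a / v

# the two examples from the article: 8 physical cores, 50% strict limit,
# 6 vcores configured on the node manager
print(cpu_limit(8, 0.5, 1, 6))  # ~0.667 -> 66.7% of one core
print(cpu_limit(8, 0.5, 2, 6))  # ~1.333 -> 133%, i.e. more than one core
```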
I've been reading Computer Organization and Design by Patterson and Hennessy and stumbled upon an exercise with three given solutions, and I can't work out which one is correct. I tried calculating with the performance equation given in the book:
CPU Execution time = (Instruction count * CPI) / Clock rate
but it doesn't work. Here's the question:
A given application written in Java runs 15 seconds on a desktop processor.
A new Java compiler is released that requires only 0.6 times as many instructions as the old compiler.
Unfortunately, it increases the CPI by a factor of 1.1.
How fast can we expect the application to run using this new compiler?
Pick the right answer from the three choices below:
a. (15 * 0.6) / 1.1 = 8.2 sec
b. 15 * 0.6 * 1.1 = 9.9 sec
c. (15 * 1.1) / 0.6 = 27.5 sec
Some insights on the correct answer and why it is obtained using that particular formula would be helpful. Thanks!
new instruction count = old instruction count * 0.6
new CPI = old CPI * 1.1
Now substitute and you will arrive at solution b.
A: 15 seconds = InsA * CPIA * CycleTime
CycleTime = 15 seconds / (InsA * CPIA)
B: TimeB = (0.6 * InsA) * (1.1 * CPIA) * CycleTime
TimeB = (0.6 * InsA) * (1.1 * CPIA) * 15 seconds / (InsA * CPIA)
TimeB = 0.6 * 1.1 * 15 seconds = 9.9 seconds
(Note that execution time is cycles times cycle time, so the constant carried over from A is the cycle time, i.e. 1 / clock rate, not the clock rate itself.)
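The substitution is just arithmetic, which a short check confirms:

```python
old_time = 15.0     # seconds
instr_factor = 0.6  # new compiler emits 0.6x the instructions
cpi_factor = 1.1    # but CPI goes up by a factor of 1.1

# time = instructions * CPI * cycle_time, and the cycle time is unchanged
new_time = old_time * instr_factor * cpi_factor
print(round(new_time, 1))  # 9.9
```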
I did not know whether I should post this on Math.SE or Stack Overflow, but since it involves code and some basic algorithms I went with SO.
My question comes from a program that I have to write based on this article:
Article
The problem is that I can't seem to place some of the variables or understand how they fit together. Personally I think the article's mathematics is very sloppy and some rigorous statistics would have benefited it, but that's just me.
Anyway, this is my pseudo-code/algorithm for the computation, and it works:
/* Algorithm
 *
 * 1 Avg sale amount - coupon face value
 * 85 - 75 = 10 additional $ spent per redeemed coupon
 *
 * 2 Nbr coupons sold * redemption percentage (percentageOfCouponsSold)
 * 3000 * 0.85 = 2550 coupons redeemed
 *
 * 3 Nbr coupons sold * sale price * percent kept after Groupon's cut
 * 3000 * 35 * 0.50 = 52500 Groupon coupon income
 *
 * 4 Nbr coupons redeemed * additional $
 * 2550 * 10 = 25500 additional money spent by customers
 *
 * 5 Additional money spent by customers + Groupon coupon income
 * 25500 + 52500 = 78000 gross income
 *
 * Expenses
 *
 * 6 Nbr coupons redeemed * avg sale amount * percent incremental cost of sales
 * 2550 * 85 * 0.40 = 86700 total expense
 *
 *   Net cost = total expense - gross income
 *   86700 - 78000 = 8700
 *
 * 7 Nbr coupons redeemed / avg nbr coupons purchased per customer
 * 2550 / 2 = 1275 nbr customers
 *
 * 8 Nbr customers * percent of existing customers (couponUsersAlreadyCustomers)
 * 1275 * 0.60 = 765 new customers
 *
 * 9 New customers * percent of coupon users who become regular customers
 * 765 * 0.10 = 76.5 new repeat customers (avg)
 *
 * 10 Net cost / avg new repeat customers
 * 8700 / 76.5 ≈ 114 paid for each new regular customer
 *
 */
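The comment block above translates directly into runnable code. Here it is in Python, with the article's numbers as inputs (the variable names are mine):

```python
# Inputs taken from the article's example deal
avg_sale = 85                # average amount spent per visit
coupon_value = 75            # coupon face value
coupons_sold = 3000
sale_price = 35              # what the customer pays for the coupon
groupon_cut = 0.50           # Groupon keeps half the sale price
redemption_rate = 0.85
incremental_cost = 0.40      # incremental cost of sales
coupons_per_customer = 2
new_customer_rate = 0.60     # 1 - 40% existing customers
return_rate = 0.10           # new customers who become regulars

extra_per_visit = avg_sale - coupon_value                      # 10
redeemed = coupons_sold * redemption_rate                      # 2550
coupon_income = coupons_sold * sale_price * (1 - groupon_cut)  # 52500
extra_income = redeemed * extra_per_visit                      # 25500
gross_income = coupon_income + extra_income                    # 78000

expenses = redeemed * avg_sale * incremental_cost              # 86700
net_cost = expenses - gross_income                             # 8700

customers = redeemed / coupons_per_customer                    # 1275
new_customers = customers * new_customer_rate                  # 765
repeat_customers = new_customers * return_rate                 # 76.5

cost_per_regular = net_cost / repeat_customers                 # ~113.7
print(round(cost_per_regular))  # 114
```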
The question is: where the heck does that 60% come from, and is it a fixed value? I mean, technically 40% + 10% is 50%, and the 40% is the existing customers. Second, what about:
"7. What is the advertising value of having your business promoted to 900,000 people — that’s the number on Groupon’s Chicago list — even if they don’t buy a coupon? $1,000 advertising value."
Why do I need that? I am already comparing how much each new customer costs me with Groupon versus traditional advertising, so why is it there? Do I need it as part of my computation?
It's a good project, but the way the author of the article explains the math is really confusing!
The 60% comes from the assumption "4. 40 percent used by existing customers." Implicit seems to be the assumption that the "average number of coupons bought by each customer" does not differ significantly between new and existing customers. This is not mentioned explicitly, but since 2,550 is the number of redeemed coupons and the percentage is multiplied by 2,550 / 2 (assumed numbers of customers associated with these coupons) this seems to be a necessary assumption.
Edit: Sorry, I overlooked your second question. The $1,000 is mentioned only under revenue but not included in the calculation of the cost. In theory you could subtract it from the cost, but that is only sensible if you would have spent that money on advertising anyway, so that it can be considered a cost external to the deal. It is prudent to simply mention this additional benefit (which you get on top of the new customers) but still count the full cost, since it definitely has to be paid for.
So I was told to ask this here instead of on another Stack Exchange site:
If I have a program P, which runs on a 2 GHz machine M in 30 seconds, and it is optimized by replacing all instances of 'raise to the power 4' with 3 multiply instructions, the optimized program is P'. The CPI of multiplication is 2 and the CPI of power is 12. If 10^9 such operations are optimized, what is the percentage of total execution time improved?
Here is what I've deduced so far.
For P, we have:
time (30s)
CPI: 12
Frequency (2GHz)
For P', we have:
CPI (6) [2*3]
Frequency (2GHz)
So I need to figure out how to calculate the time of P' in order to compare the times, but I have no idea how to achieve this. Could someone please help me out?
Program P runs on a 2 GHz machine M in 30 seconds and is optimized by replacing all instances of 'raise to the power 4' with 3 multiply instructions, giving the optimized program P'. The CPI of multiplication is 2 and the CPI of power is 12, and 10^9 such operations are optimized.
From this information we can compute the time needed to execute all POWER4 ('raise to the power 4') instructions: we know the total count of such instructions (all of them were replaced; the count is 10^9, or 1 G). Every POWER4 instruction needs 12 clock cycles (CPI = cycles per instruction), so all POWER4 instructions executed in 1 G * 12 = 12 G cycles.
A 2 GHz machine runs 2 G cycles per second, and there are 30 seconds of execution, so the total execution of program P is 2 G * 30 = 60 G cycles (60 * 10^9). We can conclude that program P contains some other instructions as well. We don't know which instructions, how many times they execute, or their mean CPI, but we do know the time needed to execute them: 60 G - 12 G = 48 G cycles (total running time minus POWER4 running time; true for simple in-order processors). So there are X executed instructions with mean CPI Y, and X * Y = 48 G.
So, total cycles executed for the program P is
Freq * seconds = POWER4_count * POWER4_CPI + OTHER_count * OTHER_mean_CPI
2G * 30 = 1G * 12 + X*Y
Or total running time for P:
30s = (1G * 12 + X*Y) / 2GHz
what is the percent of total execution time improved?
After replacing 1 G POWER4 operations with 3 times as many MUL (multiply) instructions, we have 3 G MUL operations, and the cycles needed for them are CPI * count, where the MUL CPI is 2: 2 * 3 G = 6 G cycles. The X * Y part of P' is unchanged, so we can solve the problem.
P' time in seconds = ( MUL_count * MUL_CPI + OTHER_count * OTHER_mean_CPI ) / Frequency
P' time = (3G*2 + X*Y) / 2GHz
The improvement is not as big as one might expect, because the POWER4 instructions account for only part of P's running time (12 G of 60 G cycles), and the optimization shrank that 12 G to 6 G without changing the remaining 48 G cycles. Halving only part of the time does not halve the total time.
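Putting the whole answer together as a short Python check (G = 10^9):

```python
freq = 2e9            # 2 GHz
old_time = 30         # seconds
power4_count = 1e9    # optimized operations
power4_cpi = 12
mul_cpi = 2
muls_per_power4 = 3   # each POWER4 becomes 3 MULs

total_cycles = freq * old_time                         # 60 G
power4_cycles = power4_count * power4_cpi              # 12 G
other_cycles = total_cycles - power4_cycles            # 48 G (the X*Y part)
mul_cycles = power4_count * muls_per_power4 * mul_cpi  # 6 G

new_time = (other_cycles + mul_cycles) / freq          # 27 s
improvement = (old_time - new_time) / old_time         # 0.1 -> 10%
print(new_time, improvement)  # 27.0 0.1
```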