How to find optimal Akka configs in Play Framework 2.8 - performance

Here is my current configuration on our test environment, which has 8 vCPUs and 32 GB of RAM:
play.server.provider = "play.core.server.AkkaHttpServerProvider"
akka {
  loglevel = "INFO"
  log-config-on-start = on
  actor.default-dispatcher {
    executor = fork-join-executor
    fork-join-executor {
      parallelism-min = 8
      parallelism-factor = 16.0
      parallelism-max = 64
      task-peeking-mode = "FIFO"
    }
  }
}
How can we calculate optimal values for production servers (16 vCPUs and 64 GB of memory)?
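For what it's worth, Akka's fork-join-executor derives its thread count by clamping ceil(available processors * parallelism-factor) between parallelism-min and parallelism-max. Below is a minimal sketch of that arithmetic in plain Java (not an Akka API call; the vCPU counts are the ones quoted above):

public class DispatcherSizing {
    // Mirrors the clamp Akka applies to the fork-join-executor settings (a sketch, not Akka source).
    static int scaledPoolSize(int cores, double factor, int min, int max) {
        return Math.min(Math.max((int) Math.ceil(cores * factor), min), max);
    }

    public static void main(String[] args) {
        System.out.println(scaledPoolSize(8, 16.0, 8, 64));   // test env (8 vCPU)  -> 64, capped by parallelism-max
        System.out.println(scaledPoolSize(16, 16.0, 8, 64));  // prod env (16 vCPU) -> 64, still capped by parallelism-max
    }
}

With these settings both environments already hit the parallelism-max cap of 64 threads, so adjusting parallelism-max (or parallelism-factor) is what actually changes the pool size.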

Related

Why does a 'for loop' inside a 'parallel for loop' take longer than the same 'for loop' in a serial region?

I am testing the performance of a cluster on which I am using 64 threads. I have written a simple piece of code:
#include <iostream>
#include <omp.h>
#include <vector>

using namespace std;

int main() {
    unsigned int m(67000);
    double start_time_i(0.), end_time_i(0.), start_time_init(0.), end_time_init(0.), diff_time_i(0.),
           start_time_j(0.), end_time_j(0.), diff_time_j(0.), total_time(0.);
    cout << "omp_get_max_threads : " << omp_get_max_threads() << endl;
    cout << "omp_get_num_procs : " << omp_get_num_procs() << endl;
    omp_set_num_threads(omp_get_max_threads());
    unsigned int dim_i = omp_get_max_threads();
    unsigned int dim_j = dim_i * m;
    std::vector<std::vector<unsigned int>> vector;
    vector.resize(dim_i, std::vector<unsigned int>(dim_j, 0));
    // serial reference: fill one row outside any parallel region
    start_time_init = omp_get_wtime();
    for (unsigned int j = 0; j < dim_j; j++) {
        vector[0][j] = j;
    }
    end_time_init = omp_get_wtime();
    start_time_i = omp_get_wtime();
    #pragma omp parallel for
    for (unsigned int i = 0; i < dim_i; i++) {
        // each thread fills its own row and times its inner loop
        start_time_j = omp_get_wtime();
        for (unsigned int j = 0; j < dim_j; j++) vector[i][j] = i + j;
        end_time_j = omp_get_wtime();
        cout << "i " << i << " thread " << omp_get_thread_num() << " int_time = " << (end_time_j - start_time_j) * 1000 << endl;
    }
    end_time_i = omp_get_wtime();
    cout << "time_final = " << (end_time_i - start_time_i) * 1000 << endl;
    cout << "initial non parallel region " << " time = " << (end_time_init - start_time_init) * 1000 << endl;
    return 0;
}
I do not understand why "(end_time_j-start_time_j)*1000" is much bigger (around 50) than the time needed to go through the same loop over j outside the parallel region, i.e. "end_time_init-start_time_init" (around 1).
omp_get_max_threads() and omp_get_num_procs() are both equal to 64.
In your loop you just fill memory locations with a lot of values. This task is not computationally expensive; it depends on the speed of memory writes. One thread can do it at a certain rate, but when you use N threads simultaneously, the total memory bandwidth remains the same on shared-memory multicore systems (i.e., most PCs and laptops) and increases on distributed-memory multicore systems (high-end servers). For more details please read this.
So, depending on the system, the per-thread memory write speed either remains the same or decreases when several loops run concurrently. To me, a 50x difference seems a bit large. I got the following results on Compiler Explorer (which would mean it has to be a distributed-memory multicore system):
omp_get_max_threads : 4
omp_get_num_procs : 2
i 2 thread 2 int_time = 0.095537
i 0 thread 0 int_time = 0.084061
i 1 thread 1 int_time = 0.099578
i 3 thread 3 int_time = 0.10519
time_final = 0.868523
initial non parallel region time = 0.090862
On my laptop I got the following (so it is a shared-memory multicore system):
omp_get_max_threads : 8
omp_get_num_procs : 8
i 7 thread 7 int_time = 0.7518
i 5 thread 5 int_time = 1.0555
i 1 thread 1 int_time = 1.2755
i 6 thread 6 int_time = 1.3093
i 2 thread 2 int_time = 1.3093
i 3 thread 3 int_time = 1.3093
i 4 thread 4 int_time = 1.3093
i 0 thread 0 int_time = 1.3093
time_final = 1.915
initial non parallel region time = 0.1578
In conclusion it does depend on the system you are using...

High CPU load on SYN flood

When under a SYN flood attack, my CPU reaches 100% in no time due to the kernel process named ksoftirqd.
I have tried many mitigations, but none solved the problem.
These are my sysctl configurations, as returned by sysctl -p:
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
fs.file-max = 10000000
fs.nr_open = 10000000
net.core.somaxconn = 128
net.core.netdev_max_backlog = 2500
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_max_tw_buckets = 262144
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.ipv4.tcp_syn_retries = 3
net.ipv4.tcp_tw_reuse = 1
net.netfilter.nf_conntrack_max = 10485760
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 30
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 15
vm.swappiness = 10
net.ipv4.icmp_echo_ignore_all = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.tcp_synack_retries = 1
Even after activating SYN cookies, the CPU load stays the same.
The listen queue of port 443 (the port under attack) shows 512 connections in SYN_RECV, which is the default backlog limit set by NGINX.
This is also weird, because somaxconn is set to a much lower value than 512 (128), so how does it exceed that limit?
somaxconn is supposed to be the upper bound for every socket's listen backlog, and it's not.
I have read a lot and I'm still confused.
As far as I understand, somaxconn is the backlog size for both the LISTEN and ACCEPT queues,
so what exactly is tcp_max_syn_backlog?
And how do I calculate each queue's size?
I also read that SYN cookies do not activate immediately, but only after the tcp_max_syn_backlog size is reached; is that true?
And if so, that means its value needs to be lower than somaxconn.
I even tried activating tcp_abort_on_overflow while under attack, but nothing changed;
if it's true that SYN cookies activate on overflow, what does applying them together result in?
I have 3 GB of RAM, of which only 700 MB is used; my only problem is the CPU load.
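A note on the backlog confusion above: on Linux, the backlog an application passes to listen() is silently capped at net.core.somaxconn, but that cap applies to the accept (fully established) queue; sockets sitting in SYN_RECV belong to the half-open queue, whose limit involves net.ipv4.tcp_max_syn_backlog rather than somaxconn (the exact interplay is kernel-version dependent). A minimal Java sketch of the accept-queue side only (the port number and backlog value are illustrative, not taken from the question):

import java.net.ServerSocket;

public class BacklogDemo {
    public static void main(String[] args) throws Exception {
        int requestedBacklog = 512; // what the application asks for
        // On Linux the kernel silently truncates this request to net.core.somaxconn
        // (128 in the configuration above), so the effective accept-queue limit
        // becomes min(512, 128) = 128, regardless of what was requested here.
        try (ServerSocket server = new ServerSocket(8443, requestedBacklog)) {
            System.out.println("Listening on " + server.getLocalPort()
                + ", requested backlog = " + requestedBacklog);
        }
    }
}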

My Mac reports 4 Vulkan queue families, but the reference says it has 1

My home computer is a late 2012 Mac Mini; it has a 3rd-generation Intel Ivy Bridge processor (a mobile model, for reasons only Apple knows).
The "vulkaninfo.app" from the LunarG SDK reports 4 queue families on the integrated GPU, but public information shows that it should only have one. What's happening?
VkQueueFamilyProperties[0]:
===========================
queueFlags = GRAPHICS | COMPUTE | TRANSFER
queueCount = 1
timestampValidBits = 64
minImageTransferGranularity = (1, 1, 1)
present support = true
VkQueueFamilyProperties[1]:
===========================
queueFlags = GRAPHICS | COMPUTE | TRANSFER
queueCount = 1
timestampValidBits = 64
minImageTransferGranularity = (1, 1, 1)
present support = true
VkQueueFamilyProperties[2]:
===========================
queueFlags = GRAPHICS | COMPUTE | TRANSFER
queueCount = 1
timestampValidBits = 64
minImageTransferGranularity = (1, 1, 1)
present support = true
VkQueueFamilyProperties[3]:
===========================
queueFlags = GRAPHICS | COMPUTE | TRANSFER
queueCount = 1
timestampValidBits = 64
minImageTransferGranularity = (1, 1, 1)
present support = true
On macOS, there is no queue count or queue family count on Metal devices (as of July 2019), so one can create one or any number of queues and get full utilization of the device.
However, as noted in a GitHub issue, some programs with hardwired assumptions crash when there is only 1 queue family, so MoltenVK was changed to report 4 queue families to work around it (see pull request 450).

Runtime.getRuntime().maxMemory() calculation method

Here is the code:
System.out.println("Runtime max: " + mb(Runtime.getRuntime().maxMemory()));
MemoryMXBean m = ManagementFactory.getMemoryMXBean();
System.out.println("Non-heap: " + mb(m.getNonHeapMemoryUsage().getMax()));
System.out.println("Heap: " + mb(m.getHeapMemoryUsage().getMax()));
for (MemoryPoolMXBean mp : ManagementFactory.getMemoryPoolMXBeans()) {
    System.out.println("Pool: " + mp.getName() +
            " (type " + mp.getType() + ")" +
            " = " + mb(mp.getUsage().getMax()));
}
Running the code on JDK 8 gives:
[root@docker-runner-2486794196-0fzm0 docker-runner]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
[root@docker-runner-2486794196-0fzm0 docker-runner]# java -jar -Xmx1024M -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap test.jar
Runtime max: 954728448 (910.50 M)
Non-heap: -1 (-0.00 M)
Heap: 954728448 (910.50 M)
Pool: Code Cache (type Non-heap memory) = 251658240 (240.00 M)
Pool: Metaspace (type Non-heap memory) = -1 (-0.00 M)
Pool: Compressed Class Space (type Non-heap memory) = 1073741824 (1024.00 M)
Pool: PS Eden Space (type Heap memory) = 355467264 (339.00 M)
Pool: PS Survivor Space (type Heap memory) = 1048576 (1.00 M)
Pool: PS Old Gen (type Heap memory) = 716177408 (683.00 M)
Runtime max: 954728448 (910.50 M)
Runtime.maxMemory() is 910.50 M; I want to know how this value is worked out.
On JDK 7, "Runtime.getRuntime().maxMemory()" = "-Xmx" - "Survivor", but that does not hold on JDK 8.
In JDK 8 the formula Runtime.maxMemory() = Xmx - Survivor still holds, but the trick is how Survivor is estimated.
You haven't set the initial heap size (-Xms), and the Adaptive Size Policy is on by default. This means the heap can resize and heap generation boundaries can move at runtime. Runtime.maxMemory() estimates the amount of memory conservatively, subtracting the maximum possible survivor size from the size of the New Generation.
Runtime.maxMemory() = OldGen + NewGen - MaxSurvivor
where MaxSurvivor = NewGen / MinSurvivorRatio
In your example OldGen = 683 MB, NewGen = 341 MB and MinSurvivorRatio = 3 by default. That is,
Runtime.maxMemory() = 683 + 341 - (341/3) = 910.333 MB
If you disable the adaptive size policy (-XX:-UseAdaptiveSizePolicy) or set the initial heap size (-Xms) to the same value as -Xmx, you'll again see that Runtime.maxMemory() = OldGen + Eden + Survivor.
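As a quick check of the arithmetic above (a sketch; 683 MB, 341 MB, and MinSurvivorRatio = 3 are the values quoted in this answer, not measured here):

public class MaxMemoryEstimate {
    public static void main(String[] args) {
        double oldGen = 683, newGen = 341;   // MB, from the memory pools reported in the question
        double minSurvivorRatio = 3;         // HotSpot default for -XX:MinSurvivorRatio
        double estimate = oldGen + newGen - newGen / minSurvivorRatio;
        System.out.println(estimate);        // ~910.33 MB, close to the reported Runtime max of 910.50 M
    }
}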
The assumption that the discrepancy between the reported max heap and the actual max heap stems from the survivor space was based on empirical data, but has not been confirmed as an intentional feature.
I expanded the program a bit (code at the end). Running this expanded program on JDK 6 with -Xmx1G -XX:-UseParallelGC gave me
Runtime max: 1037959168 (989 MiB)
Heap: 1037959168 (989 MiB)
Pool: Eden Space = 286326784 (273 MiB)
Pool: Survivor Space = 35782656 (34 MiB)
Pool: Tenured Gen = 715849728 (682 MiB)
Pool: Heap memory total = 1037959168 (989 MiB)
Eden + 2*Survivor + Tenured = 1073741824 (1024 MiB)
(Non-heap: omitted)
Here, the values match: the reported max size is equal to the sum of the heap spaces, and the sum of the reported max size and one Survivor Space's size equals the result of the formula Eden + 2*Survivor + Tenured, the precise heap size.
The reason I specified -XX:-UseParallelGC is that the term "Tenured" in the linked answer gave me a hint about where this assumption came from. When I run the program on Java 6 without -XX:-UseParallelGC on my machine, I get:
Runtime max: 954466304 (910 MiB)
Heap: 954466304 (910 MiB)
Pool: PS Eden Space = 335609856 (320 MiB)
Pool: PS Survivor Space = 11141120 (10 MiB)
Pool: PS Old Gen = 715849728 (682 MiB)
Pool: Heap memory total = 1062600704 (1013 MiB)
Eden + 2*Survivor + Tenured = 1073741824 (1024 MiB)
(Non-heap: omitted)
Here, the reported max size is not equal to the sum of the heap memory pools, hence the "reported max size plus Survivor" formula produces a different result. These are the same values I get with Java 8 using default options, so your problem is not related to Java 8; even on Java 6, the values do not match when the garbage collector differs from the one used in the linked Q&A.
Note that starting with Java 9, -XX:+UseG1GC became the default, and with that, I get:
Runtime max: 1073741824 (1024 MiB)
Heap: 1073741824 (1024 MiB)
Pool: G1 Eden Space = unspecified/unlimited
Pool: G1 Survivor Space = unspecified/unlimited
Pool: G1 Old Gen = 1073741824 (1024 MiB)
Pool: Heap memory total = 1073741824 (1024 MiB)
Eden + 2*Survivor + Tenured = N/A
(Non-heap: omitted)
The bottom line is that the assumption that the difference equals the size of the Survivor Space only holds for one specific (outdated) garbage collector. But when applicable, the formula Eden + 2*Survivor + Tenured gives the exact heap size. For the "Garbage First" collector, where the formula is not applicable, the reported max size is already the correct value.
So the best strategy is to get the max values for Eden, Survivor, and Tenured (aka Old), then check whether any of these values is -1. If so, just use Runtime.getRuntime().maxMemory(); otherwise, calculate Eden + 2*Survivor + Tenured.
The program code:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class MemoryReport {
    public static void main(String[] args) {
        System.out.println("Runtime max: " + mb(Runtime.getRuntime().maxMemory()));
        MemoryMXBean m = ManagementFactory.getMemoryMXBean();
        System.out.println("Heap: " + mb(m.getHeapMemoryUsage().getMax()));
        scanPools(MemoryType.HEAP);
        checkFormula();
        System.out.println();
        System.out.println("Non-heap: " + mb(m.getNonHeapMemoryUsage().getMax()));
        scanPools(MemoryType.NON_HEAP);
        System.out.println();
    }

    private static void checkFormula() {
        long total = 0;
        boolean eden = false, old = false, survivor = false, na = false;
        for (MemoryPoolMXBean mp : ManagementFactory.getMemoryPoolMXBeans()) {
            final long max = mp.getUsage().getMax();
            if (mp.getName().contains("Eden")) { na = eden; eden = true; }
            else if (mp.getName().matches(".*(Old|Tenured).*")) { na = old; old = true; }
            else if (mp.getName().contains("Survivor")) {
                na = survivor;
                survivor = true;
                total += max; // Survivor is counted twice: Eden + 2*Survivor + Tenured
            }
            else continue;
            if (max == -1) na = true;
            if (na) break;
            total += max;
        }
        System.out.println("Eden + 2*Survivor + Tenured = "
            + (!na && eden && old && survivor ? mb(total) : "N/A"));
    }

    private static void scanPools(final MemoryType type) {
        long total = 0;
        for (MemoryPoolMXBean mp : ManagementFactory.getMemoryPoolMXBeans()) {
            if (mp.getType() != type) continue;
            long max = mp.getUsage().getMax();
            System.out.println("Pool: " + mp.getName() + " = " + mb(max));
            if (max != -1) total += max;
        }
        System.out.println("Pool: " + type + " total = " + mb(total));
    }

    private static String mb(long mem) {
        return mem == -1 ? "unspecified/unlimited" :
            String.format("%d (%d MiB)", mem, mem >>> 20);
    }
}

Program execution taking almost the same user time on CPU as on GPU?

The program for finding prime numbers using OpenCL 1.1 gave the following benchmarks:
Device: CPU
Real time: approx. 3 sec
User time: approx. 32 sec
Device: GPU
Real time: approx. 37 sec
User time: approx. 32 sec
Why is the user time of execution on the GPU not less than that of the CPU? Is data/task parallelization not occurring?
System specifications: 64-bit CentOS 5.3 system with two ATI Radeon 5970 graphics cards + Intel Core i7 processor (12 cores)
Your kernel is rather inefficient; I have an adjusted one below for you to consider. As to why it runs better on a CPU device...
Using your algorithm, the work items take varying amounts of time to execute. They take longer as the numbers tested grow larger. A work group on a GPU will not finish until all of its items are finished, so some of the hardware is left idle until the last item is done. On a CPU, it behaves more like a loop iterating over the kernel items, so the difference in cycles needed to compute each item won't drastically affect performance.
'A' is not used by the kernel. It should not be copied unless it is used. It looks like you wanted to test A[i] rather than 'i' itself, though.
I think the GPU would be much better at FFT-based prime calculations, or even a sieve algorithm.
__kernel void find_primes(__global int *B)  // signature assumed; the original kernel also took an unused buffer 'A'
{
    int t;
    int i = get_global_id(0);
    int end = (int)sqrt((float)i);
    if (i % 2 == 0) {
        B[i] = (i == 2) ? 1 : 0;   // 2 is the only even prime; other even numbers are composite
    } else {
        B[i] = (i > 1) ? 1 : 0;    // odd candidates start as non-zero ("possibly prime"); 0 and 1 are not prime
    }
    for (t = 3; (t <= end) && (B[i] > 0); t += 2) {
        if (i % t == 0) {
            B[i] = 0;              // found an odd divisor, so i is not prime
        }
    }
}
