High CPU load on SYN flood - linux-kernel

When under a SYN flood attack, my CPU reaches 100% almost immediately, consumed by the kernel process ksoftirqd.
I have tried many mitigations, but none of them solved the problem.
This is my sysctl configuration as returned by sysctl -p:
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
fs.file-max = 10000000
fs.nr_open = 10000000
net.core.somaxconn = 128
net.core.netdev_max_backlog = 2500
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_max_tw_buckets = 262144
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.ipv4.tcp_syn_retries = 3
net.ipv4.tcp_tw_reuse = 1
net.netfilter.nf_conntrack_max = 10485760
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 30
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 15
vm.swappiness = 10
net.ipv4.icmp_echo_ignore_all = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.tcp_synack_retries = 1
Even after activating SYN cookies, the CPU load stays the same.
The listen queue of port 443 (the port under attack) is showing 512 sockets in SYN_RECV, which is the default backlog limit set by NGINX.
That is also weird, because SOMAXCONN is set to a much lower value than 512 (128), so how does the queue exceed that limit?
SOMAXCONN is supposed to be the upper bound for every socket's listen backlog, and here it is not.
I have read a lot and I am confused.
As far as I understand, SOMAXCONN is the backlog size for both the LISTEN and ACCEPT queues,
so what exactly is tcp_max_syn_backlog?
And how do I calculate each queue's size?
I also read that SYN cookies do not activate immediately, but only after the queue reaches tcp_max_syn_backlog; is that true?
If so, it means its value needs to be lower than SOMAXCONN.
I even tried activating tcp_abort_on_overflow while under attack, but nothing changed.
If it is true that SYN cookies activate on overflow, what is the result of applying both together?
I have 3 GB of RAM, of which only 700 MB is used; my only problem is the CPU load.
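For illustration, here is a minimal sketch (Python) of the sizing rule as I currently understand it; this is an assumption, not a statement of kernel internals, and exact behaviour differs between kernel versions: the accept queue of a listening socket is capped at min(application backlog, net.core.somaxconn), while the SYN queue is bounded by net.ipv4.tcp_max_syn_backlog.

def read_sysctl(name):
    # e.g. "net.core.somaxconn" -> /proc/sys/net/core/somaxconn
    with open("/proc/sys/" + name.replace(".", "/")) as f:
        return int(f.read().strip())

def queue_caps(app_backlog):
    somaxconn = read_sysctl("net.core.somaxconn")
    syn_backlog = read_sysctl("net.ipv4.tcp_max_syn_backlog")
    return {
        "accept_queue_cap": min(app_backlog, somaxconn),  # assumed rule
        "syn_queue_cap": syn_backlog,                     # assumed rule
    }

if __name__ == "__main__":
    # nginx documents 511 as its default listen backlog on Linux
    print(queue_caps(511))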

Related

How to activate parallel execution for OMNeT++ project

I have an OMNeT++ application, mFogsim, that I want to execute in parallel. When I partition the modules manually, every partition id must equal zero; if I give any number other than zero (1, for example), OMNeT++ throws an error.
The configuration that works:
**Fog.router.partition-id = 0
**Fog.Broker.partition-id = 0
**Fog.user*.partition-id = 0
**Fog.ap*.partition-id = 0
**Fog.usr[*].partition-id = 0
**Fog.Fog*.partition-id = 0
**Fog.router*.partition-id = 0
**Fog.Broker*.partition-id = 0
**Fog.Internet.partition-id = 0
**Fog.Datacntr.partition-id = 0
**Fog.configurator.partition-id = 0
**Fog.radioMedium.partition-id = 0
The configuration that throws an error, where one partition id is changed to a number other than zero, for example:
**Fog.router.partition-id = 0
**Fog.Broker.partition-id = 0
**Fog.user*.partition-id = 0
**Fog.ap*.partition-id = 0
**Fog.usr[*].partition-id = 0
**Fog.Fog*.partition-id = 0
**Fog.router*.partition-id = 0
**Fog.Broker*.partition-id = 0
**Fog.Internet.partition-id = 0
**Fog.Datacntr.partition-id = 1
**Fog.configurator.partition-id = 0
**Fog.radioMedium.partition-id = 0
The above configuration throws the following error:
Error in module (cModule) Fog (id=1) during network setup: wrong partitioning: value 1 too large for 'Fog.Datacntr' (total partitions=1)
Any ideas?
Additional info:
OS: Ubuntu 16.04
RAM: 32 GB
CPU: 40 logical cores
In your mFogsim.ini add:
[General]
parsim-num-partitions = 2
where 2 is the number of partitions you want to divide the network into.
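For example, combining this with the assignments from the question, the relevant part of mFogsim.ini might look like the sketch below (it assumes the rest of the parallel-simulation setup, such as the communications class and starting one process per partition, is configured separately):

[General]
# enable parallel simulation and split the network into two partitions
parallel-simulation = true
parsim-num-partitions = 2

# same assignments as in the question, with Datacntr moved to partition 1
**Fog.router.partition-id = 0
**Fog.Broker.partition-id = 0
**Fog.user*.partition-id = 0
**Fog.ap*.partition-id = 0
**Fog.usr[*].partition-id = 0
**Fog.Fog*.partition-id = 0
**Fog.router*.partition-id = 0
**Fog.Broker*.partition-id = 0
**Fog.Internet.partition-id = 0
**Fog.Datacntr.partition-id = 1
**Fog.configurator.partition-id = 0
**Fog.radioMedium.partition-id = 0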

How to find memory and runtime used by a NuSMV model

Given a NuSMV model, how do I find its runtime and how much memory it consumed?
The runtime can be found using this command at the system prompt: /usr/bin/time -f "time %e s" NuSMV filename.smv
The above gives the wall-clock time. Is there a better way to obtain runtime statistics from within NuSMV itself?
Also, how do I find out how much RAM the program used while processing the file?
One possibility is to use the usage command, which displays both the amount of RAM currently in use and the user and system time consumed by the tool since it was started (so usage should be called both before and after each operation you want to profile).
An example execution:
NuSMV > usage
Runtime Statistics
------------------
Machine name: *****
User time 0.005 seconds
System time 0.005 seconds
Average resident text size = 0K
Average resident data+stack size = 0K
Maximum resident size = 6932K
Virtual text size = 8139K
Virtual data size = 34089K
data size initialized = 3424K
data size uninitialized = 178K
data size sbrk = 30487K
Virtual memory limit = -2147483648K (-2147483648K)
Major page faults = 0
Minor page faults = 2607
Swaps = 0
Input blocks = 0
Output blocks = 0
Context switch (voluntary) = 9
Context switch (involuntary) = 0
NuSMV > reset; read_model -i nusmvLab.2018.06.07.smv ; go ; check_property ; usage
-- specification (L6 != pc U cc = len) IN mm is true
-- specification F (min = 2 & max = 9) IN mm is true
-- specification G !((((max > arr[0] & max > arr[1]) & max > arr[2]) & max > arr[3]) & max > arr[4]) IN mm is true
-- invariant max >= min IN mm is true
Runtime Statistics
------------------
Machine name: *****
User time 47.214 seconds
System time 0.284 seconds
Average resident text size = 0K
Average resident data+stack size = 0K
Maximum resident size = 270714K
Virtual text size = 8139K
Virtual data size = 435321K
data size initialized = 3424K
data size uninitialized = 178K
data size sbrk = 431719K
Virtual memory limit = -2147483648K (-2147483648K)
Major page faults = 1
Minor page faults = 189666
Swaps = 0
Input blocks = 48
Output blocks = 0
Context switch (voluntary) = 12
Context switch (involuntary) = 145

Direct Mapping Cache

Consider the cache system with the following properties:
Cache (direct mapped cache):
- Cache size 128 bytes, block size 16 bytes (2^4 bytes)
- Tag/Valid bits for cache blocks are as follows:
Block index - 0 1 2 3 4 5 6 7
Tag - 0 6 7 0 5 3 1 3
Valid - 1 0 0 1 0 0 0 1
Find the tag, block index, block offset, and cache hit/miss for memory addresses 0x7f6 and 0x133.
I am not sure how to solve.
Since the cache size is 128 bytes and the block size is 16 bytes, the cache has 128/16 = 8 blocks, so the block index is 3 bits.
Since the block size is 16 bytes, the block offset is 4 bits.
The address is 12 bits; for 0x7f6 = 0111 1111 0110:
Offset = 0110 = 6
Index = 111 = 7
Tag = 01111 = 0xF
Block index 7 holds tag 3 with the valid bit set, which does not match tag 0xF, so 0x7f6 is a miss.
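As a cross-check, here is a small sketch (Python) that applies the same 4-bit offset / 3-bit index / 5-bit tag split to both addresses from the question, using the tag and valid bits from the table above:

BLOCK_SIZE = 16                   # bytes -> 4 offset bits
NUM_BLOCKS = 128 // BLOCK_SIZE    # 8 blocks -> 3 index bits

tags  = [0, 6, 7, 0, 5, 3, 1, 3]  # per block index, from the question
valid = [1, 0, 0, 1, 0, 0, 0, 1]

def lookup(addr):
    offset = addr % BLOCK_SIZE
    index = (addr // BLOCK_SIZE) % NUM_BLOCKS
    tag = addr // (BLOCK_SIZE * NUM_BLOCKS)
    hit = valid[index] == 1 and tags[index] == tag
    return tag, index, offset, "hit" if hit else "miss"

for addr in (0x7F6, 0x133):
    tag, index, offset, result = lookup(addr)
    print(f"{addr:#05x}: tag={tag:#x} index={index} offset={offset} -> {result}")

# 0x7f6: tag 0xf, index 7, offset 6 -> miss (block 7 holds tag 3)
# 0x133: tag 0x2, index 3, offset 3 -> miss (block 3 holds tag 0)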

JCS 1.3 - pre-load cache from disk

I am using the Indexed Disk Cache with JCS 1.3. When I restart, the JCS cache does not seem to pre-load its data; instead it initializes the cache lazily.
On startup, the stats are as below:
Region Name = triplet_set_1
HitCountRam = 0
HitCountAux = 0
---------------------------LRU Memory Cache
List Size = 0
Map Size = 0
Put Count = 0
Hit Count = 0
Miss Count = 0
---------------------------Indexed Disk Cache
Is Alive = true
Key Map Size = 138832
Data File Length = 72470304
Hit Count = 0
Bytes Free = 0
Optimize Operation Count = 1
Times Optimized = 0
Recycle Count = 0
Recycle Bin Size = 0
Startup Size = 138832
Purgatory Hits = 0
Purgatory Size = 0
Working = true
Alive = false
Empty = true
Size = 0
Region Name = triplet_set_1
HitCountRam = 200
HitCountAux = 100
I was hoping to see a large map size, given that the data file length is significant.
Thanks a lot

Maximum value of PCR

What is the maximum value of the Program Clock Reference (PCR) in MPEG?
I understand that it is derived from a 27 MHz clock and periodically loaded into a 42-bit field:
PCR(i) = PCR_Base(i) * 300 + PCR_Ext(i)
where PCR_Base is loaded into a 33-bit register
and PCR_Ext is loaded into a 9-bit register.
So, the maximum value of PCR with respect to the 27 MHz clock is:
PCR = (2^33 - 1)*300 + (2^9 - 1) = 2,576,980,377,811
=> 2,576,980,377,811 / 27,000,000 = 95443.7 s = 1590.7 min = 26.5 hours
The register overflow happens after 26.5 hours of continuous streaming. Is this understanding correct?
The PCR_Ext(i) value should be in the range 0..299,
so the maximum PCR = (2^33 - 1)*300 + 299 = 2,576,980,377,599.
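As a quick check of the arithmetic (a short Python sketch; nothing MPEG-specific is assumed beyond the field sizes above):

PCR_BASE_MAX = 2**33 - 1        # 33-bit base field
PCR_EXT_MAX = 299               # extension counts 0..299, not 2**9 - 1

max_pcr = PCR_BASE_MAX * 300 + PCR_EXT_MAX
print(max_pcr)                  # 2576980377599

seconds = max_pcr / 27_000_000  # 27 MHz clock
print(seconds, seconds / 3600)  # ~95443.7 s, ~26.5 hours until wrap-around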
