I've written a little VBScript program to query the page file usage under Windows XP (eventually 2003/2008 Server as well) but the figures I seem to be getting are bizarre.
This is the program:
Set wmi = GetObject("winmgmts:{impersonationLevel=impersonate}!\\.\root\cimv2")
For i = 1 To 10
    Set qry1 = wmi.ExecQuery("Select * from Win32_PageFileSetting")
    Set qry2 = wmi.ExecQuery("Select * from Win32_PerfRawData_PerfOS_PagingFile")
    initial = 0
    maximum = 0
    For Each obj in qry1
        initial = initial + obj.InitialSize
        maximum = maximum + obj.MaximumSize
    Next
    For Each obj in qry2
        If obj.Name = "_Total" Then
            WScript.Echo _
                " Initial size: " & initial & _
                " Maximum size: " & maximum & _
                " Percent used: " & obj.PercentUsage & _
                ""
        End If
    Next
    Set qry1 = Nothing
    Set qry2 = Nothing
    WScript.Sleep(1000)
Next
which outputs:
Initial size: 1512 Maximum size: 3024 Percent used: 21354
Initial size: 1512 Maximum size: 3024 Percent used: 21354
Initial size: 1512 Maximum size: 3024 Percent used: 21354
Initial size: 1512 Maximum size: 3024 Percent used: 21354
Initial size: 1512 Maximum size: 3024 Percent used: 21354
Initial size: 1512 Maximum size: 3024 Percent used: 21354
Initial size: 1512 Maximum size: 3024 Percent used: 21354
Initial size: 1512 Maximum size: 3024 Percent used: 21354
Initial size: 1512 Maximum size: 3024 Percent used: 21354
Initial size: 1512 Maximum size: 3024 Percent used: 21354
The documentation on MSDN states:
PercentUsage
Data type: uint32
Access type: Read-only
Qualifiers:
DisplayName ("% Usage")
CounterType (537003008)
DefaultScale (0)
PerfDetail (200)
Percentage of the page file instance in use. For more information,
see the PageFileBytes property in Win32_PerfRawData_PerfProc_Process.
Now that seems pretty straightforward. Why is my 3 GB page file using 21,000% of its allocated space? That would be about 630 GB, but pagefile.sys is only about 1.5 GB (and my entire hard disk is only 186 GB).
Update:
When I get the same field from Win32_PerfFormattedData_PerfOS_PagingFile, I get a more reasonable value of 5 but that still doesn't seem to coincide with Task Manager, which shows 1.06G usage out of the 3G maximum.
You can't operate with the value directly like that.
The CounterType of the PercentUsage property is 537003008, which according to this table corresponds to the PERF_RAW_FRACTION counter type. Given the formula from the second link, we end up with something like this:
" Percent used: " & ((obj.PercentUsage * 100) / obj.PercentUsage_Base) & _
When under a SYN flood attack, my CPU reaches 100% in no time, consumed by the kernel process ksoftirqd.
I have tried many mitigations, but none solved the problem.
These are my sysctl settings, as returned by sysctl -p:
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
fs.file-max = 10000000
fs.nr_open = 10000000
net.core.somaxconn = 128
net.core.netdev_max_backlog = 2500
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_max_tw_buckets = 262144
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.ipv4.tcp_syn_retries = 3
net.ipv4.tcp_tw_reuse = 1
net.netfilter.nf_conntrack_max = 10485760
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 30
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 15
vm.swappiness = 10
net.ipv4.icmp_echo_ignore_all = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.tcp_synack_retries = 1
Even after activating SYN cookies, the CPU load stays the same.
The listen queue of port 443 (the port under attack) shows 512 connections in SYN_RECV, which is the default backlog limit set by NGINX.
That is also weird, because SOMAXCONN is set to a much lower value (128), so how does the queue exceed that limit?
SOMAXCONN is supposed to be the upper bound for every socket's listen backlog, and it isn't.
I have read so much and I'm confused.
As far as I understand, SOMAXCONN is the backlog size for both the LISTEN and ACCEPT queues,
so what exactly is tcp_max_syn_backlog?
And how do I calculate each queue's size?
I also read that SYN cookies do not activate immediately, but only after tcp_max_syn_backlog is reached; is that true?
If so, its value needs to be lower than SOMAXCONN.
I even tried activating tcp_abort_on_overflow while under attack, but nothing changed.
If it's true that SYN cookies activate on overflow, what happens when the two are applied together?
I have 3 GB of RAM, of which only 700 MB is in use; my only problem is the CPU load.
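For what it's worth, here is the mental model I currently have of the two queues, sketched as a toy Python simulation (this reflects my understanding of the semantics, not actual kernel code; the limits and names are simplified):

```python
from collections import deque

# Toy model of a listening socket's two queues (simplified semantics).
TCP_MAX_SYN_BACKLOG = 2048       # caps half-open (SYN_RECV) connections
SOMAXCONN = 128                  # kernel cap on any listen() backlog
NGINX_BACKLOG = 512              # what the application passes to listen()

# The effective accept-queue limit is the smaller of the two.
accept_backlog = min(NGINX_BACKLOG, SOMAXCONN)

syn_queue = deque()     # half-open connections: SYN received, no final ACK yet
accept_queue = deque()  # fully established, waiting for the app to accept()

def on_syn(conn, syncookies=False):
    """A SYN arrives: it enters the SYN queue unless full (or cookies are on)."""
    if len(syn_queue) < TCP_MAX_SYN_BACKLOG or syncookies:
        syn_queue.append(conn)   # with cookies, no state would actually be stored
        return True
    return False                 # SYN dropped

def on_final_ack(conn):
    """Handshake completes: move to the accept queue if there is room."""
    if conn in syn_queue and len(accept_queue) < accept_backlog:
        syn_queue.remove(conn)
        accept_queue.append(conn)
        return True
    return False
```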
Given a NuSMV model, how can I find its runtime and how much memory it consumed?
The runtime can be found using this command at the system prompt: /usr/bin/time -f "time %e s" NuSMV filename.smv
The above gives the wall-clock time. Is there a better way to obtain runtime statistics from within NuSMV itself?
Also, how can I find out how much RAM the program used while processing the file?
One possibility is to use the usage command, which displays the amount of RAM currently in use, as well as the user and system time consumed by the tool since it was started. (Thus, usage should be called both before and after each operation that you want to profile.)
An example execution:
NuSMV > usage
Runtime Statistics
------------------
Machine name: *****
User time 0.005 seconds
System time 0.005 seconds
Average resident text size = 0K
Average resident data+stack size = 0K
Maximum resident size = 6932K
Virtual text size = 8139K
Virtual data size = 34089K
data size initialized = 3424K
data size uninitialized = 178K
data size sbrk = 30487K
Virtual memory limit = -2147483648K (-2147483648K)
Major page faults = 0
Minor page faults = 2607
Swaps = 0
Input blocks = 0
Output blocks = 0
Context switch (voluntary) = 9
Context switch (involuntary) = 0
NuSMV > reset; read_model -i nusmvLab.2018.06.07.smv ; go ; check_property ; usage
-- specification (L6 != pc U cc = len) IN mm is true
-- specification F (min = 2 & max = 9) IN mm is true
-- specification G !((((max > arr[0] & max > arr[1]) & max > arr[2]) & max > arr[3]) & max > arr[4]) IN mm is true
-- invariant max >= min IN mm is true
Runtime Statistics
------------------
Machine name: *****
User time 47.214 seconds
System time 0.284 seconds
Average resident text size = 0K
Average resident data+stack size = 0K
Maximum resident size = 270714K
Virtual text size = 8139K
Virtual data size = 435321K
data size initialized = 3424K
data size uninitialized = 178K
data size sbrk = 431719K
Virtual memory limit = -2147483648K (-2147483648K)
Major page faults = 1
Minor page faults = 189666
Swaps = 0
Input blocks = 48
Output blocks = 0
Context switch (voluntary) = 12
Context switch (involuntary) = 145
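If you prefer to measure from outside the tool instead, one option (a sketch; substitute your actual NuSMV command line for the placeholder child process) is to run it as a child process and read the peak resident set size the OS recorded for it:

```python
import resource
import subprocess
import sys

def max_rss_of(cmd):
    """Run cmd and return the peak RSS over all terminated child processes.

    Note: ru_maxrss is reported in kilobytes on Linux but in bytes on macOS.
    """
    subprocess.run(cmd, check=True)
    return resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss

# Placeholder child; replace with e.g. ["NuSMV", "nusmvLab.2018.06.07.smv"].
peak = max_rss_of([sys.executable, "-c", "data = list(range(200_000))"])
print("peak child RSS:", peak)
```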
Here is the code:
System.out.println("Runtime max: " + mb(Runtime.getRuntime().maxMemory()));
MemoryMXBean m = ManagementFactory.getMemoryMXBean();
System.out.println("Non-heap: " + mb(m.getNonHeapMemoryUsage().getMax()));
System.out.println("Heap: " + mb(m.getHeapMemoryUsage().getMax()));
for (MemoryPoolMXBean mp : ManagementFactory.getMemoryPoolMXBeans()) {
System.out.println("Pool: " + mp.getName() +
" (type " + mp.getType() + ")" +
" = " + mb(mp.getUsage().getMax()));
}
Running the code on JDK 8 gives:
[root@docker-runner-2486794196-0fzm0 docker-runner]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
[root@docker-runner-2486794196-0fzm0 docker-runner]# java -jar -Xmx1024M -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap test.jar
Runtime max: 954728448 (910.50 M)
Non-heap: -1 (-0.00 M)
Heap: 954728448 (910.50 M)
Pool: Code Cache (type Non-heap memory) = 251658240 (240.00 M)
Pool: Metaspace (type Non-heap memory) = -1 (-0.00 M)
Pool: Compressed Class Space (type Non-heap memory) = 1073741824 (1024.00 M)
Pool: PS Eden Space (type Heap memory) = 355467264 (339.00 M)
Pool: PS Survivor Space (type Heap memory) = 1048576 (1.00 M)
Pool: PS Old Gen (type Heap memory) = 716177408 (683.00 M)
Runtime max: 954728448 (910.50 M)
Runtime.maxMemory() reports 910.50 M, and I want to know how this value is arrived at.
On JDK 7, Runtime.getRuntime().maxMemory() = -Xmx - Survivor, but that formula does not work on JDK 8.
In JDK 8 the formula Runtime.maxMemory() = -Xmx - Survivor still holds, but the trick is how Survivor is estimated.
You haven't set the initial heap size (-Xms), and the Adaptive Size Policy is on by default. This means the heap can resize, and generation boundaries can move at runtime. Runtime.maxMemory() therefore estimates the amount of memory conservatively, subtracting the maximum possible survivor size from the size of the New Generation.
Runtime.maxMemory() = OldGen + NewGen - MaxSurvivor
where MaxSurvivor = NewGen / MinSurvivorRatio
In your example OldGen = 683 MB, NewGen = 341 MB and MinSurvivorRatio = 3 by default. That is,
Runtime.maxMemory() = 683 + 341 - (341/3) = 910.333 MB
If you turn off the adaptive size policy (-XX:-UseAdaptiveSizePolicy) or set the initial heap size (-Xms) to the same value as -Xmx, you'll see that Runtime.maxMemory() = OldGen + Eden + Survivor again.
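The arithmetic is easy to check directly; here is a quick sanity check of the formula in Python, using the pool sizes from the question:

```python
old_gen = 683           # MB, PS Old Gen max from the question's output
new_gen = 341           # MB, New Generation size from the answer above
min_survivor_ratio = 3  # HotSpot default for -XX:MinSurvivorRatio

max_survivor = new_gen / min_survivor_ratio
runtime_max = old_gen + new_gen - max_survivor
print(runtime_max)  # ≈ 910.33 MB, in line with the reported 910.50 M
```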
The assumption that the discrepancy between the reported max heap and the actual max heap stems from the survivor space was based on empirical data, but has never been confirmed as an intentional feature.
I expanded the program a bit (code at the end). Running this expanded program on JDK 6 with -Xmx1G -XX:-UseParallelGC gave me
Runtime max: 1037959168 (989 MiB)
Heap: 1037959168 (989 MiB)
Pool: Eden Space = 286326784 (273 MiB)
Pool: Survivor Space = 35782656 (34 MiB)
Pool: Tenured Gen = 715849728 (682 MiB)
Pool: Heap memory total = 1037959168 (989 MiB)
Eden + 2*Survivor + Tenured = 1073741824 (1024 MiB)
(Non-heap: omitted)
Here, the values match. The reported max size is equal to the sum of the heap spaces, so the sum of the reported max size and one Survivor Space’s size is equal to the result of the formula Eden + 2*Survivor + Tenured, the precise heap size.
The reason I specified -XX:-UseParallelGC is that the term “Tenured” in the linked answer gave me a hint about where this assumption came from: when I run the program on Java 6 without -XX:-UseParallelGC on my machine, I get
Runtime max: 954466304 (910 MiB)
Heap: 954466304 (910 MiB)
Pool: PS Eden Space = 335609856 (320 MiB)
Pool: PS Survivor Space = 11141120 (10 MiB)
Pool: PS Old Gen = 715849728 (682 MiB)
Pool: Heap memory total = 1062600704 (1013 MiB)
Eden + 2*Survivor + Tenured = 1073741824 (1024 MiB)
(Non-heap: omitted)
Here, the reported max size is not equal to the sum of the heap memory pools, hence the “reported max size plus Survivor” formula produces a different result. These are the same values I get with Java 8 using default options, so your problem is not specific to Java 8; even on Java 6, the values do not match when the garbage collector differs from the one used in the linked Q&A.
Note that starting with Java 9, -XX:+UseG1GC became the default and with that, I get
Runtime max: 1073741824 (1024 MiB)
Heap: 1073741824 (1024 MiB)
Pool: G1 Eden Space = unspecified/unlimited
Pool: G1 Survivor Space = unspecified/unlimited
Pool: G1 Old Gen = 1073741824 (1024 MiB)
Pool: Heap memory total = 1073741824 (1024 MiB)
Eden + 2*Survivor + Tenured = N/A
(Non-heap: omitted)
The bottom line is that the assumption that the difference equals the size of the Survivor Space only holds for one specific (outdated) garbage collector. Where it is applicable, though, the formula Eden + 2*Survivor + Tenured gives the exact heap size. For the “Garbage First” collector, where the formula is not applicable, the reported max size is already the correct value.
So the best strategy is to get the max values for Eden, Survivor, and Tenured (aka Old), then check whether any of these values is -1. If so, just use Runtime.getRuntime().maxMemory(); otherwise, calculate Eden + 2*Survivor + Tenured.
The program code:
public static void main(String[] args) {
    System.out.println("Runtime max: " + mb(Runtime.getRuntime().maxMemory()));
    MemoryMXBean m = ManagementFactory.getMemoryMXBean();
    System.out.println("Heap: " + mb(m.getHeapMemoryUsage().getMax()));
    scanPools(MemoryType.HEAP);
    checkFormula();
    System.out.println();
    System.out.println("Non-heap: " + mb(m.getNonHeapMemoryUsage().getMax()));
    scanPools(MemoryType.NON_HEAP);
    System.out.println();
}

private static void checkFormula() {
    long total = 0;
    boolean eden = false, old = false, survivor = false, na = false;
    for(MemoryPoolMXBean mp: ManagementFactory.getMemoryPoolMXBeans()) {
        final long max = mp.getUsage().getMax();
        if(mp.getName().contains("Eden")) { na = eden; eden = true; }
        else if(mp.getName().matches(".*(Old|Tenured).*")) { na = old; old = true; }
        else if(mp.getName().contains("Survivor")) {
            na = survivor;
            survivor = true;
            total += max; // counted twice: there are two survivor spaces
        }
        else continue;
        if(max == -1) na = true;
        if(na) break;
        total += max;
    }
    System.out.println("Eden + 2*Survivor + Tenured = "
        + (!na && eden && old && survivor? mb(total): "N/A"));
}

private static void scanPools(final MemoryType type) {
    long total = 0;
    for(MemoryPoolMXBean mp: ManagementFactory.getMemoryPoolMXBeans()) {
        if(mp.getType() != type) continue;
        long max = mp.getUsage().getMax();
        System.out.println("Pool: " + mp.getName() + " = " + mb(max));
        if(max != -1) total += max;
    }
    System.out.println("Pool: " + type + " total = " + mb(total));
}

private static String mb(long mem) {
    return mem == -1? "unspecified/unlimited":
        String.format("%d (%d MiB)", mem, mem >>> 20);
}
What is the maximum value of Program Clock Reference(PCR) in MPEG?
I understand that it is derived from a 27MHz clock, periodically loaded into a 42bit register.
PCR(i)=PCR_Base(i) * 300 + PCR_Ext(i)
where PCR_Base is loaded into a 33-bit register
and PCR_Ext is loaded into a 9-bit register.
So the maximum value of PCR w.r.t. the 27 MHz clock is:
PCR = (2^33 - 1)*300 + (2^9 - 1) = 2,576,980,377,811
=> 2,576,980,377,811 / 27,000,000 = 95,443.7 s = 1,590.7 min = 26.5 hours
The register overflow happens after 26.5 hours of continuous streaming. Is this understanding correct?
The PCR_Ext(i) value should be in the range 0 .. 299 (the extension counts 27 MHz ticks within one 90 kHz base tick, modulo 300).
So the maximum PCR = (2^33 - 1)*300 + 299 = 2,576,980,377,599
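The corrected maximum, and the resulting wrap-around time, can be verified with a few lines of Python:

```python
PCR_BASE_MAX = 2**33 - 1   # 33-bit base counter, ticking at 90 kHz
PCR_EXT_MAX = 299          # extension counts 27 MHz ticks modulo 300
PCR_CLOCK_HZ = 27_000_000

max_pcr = PCR_BASE_MAX * 300 + PCR_EXT_MAX
wrap_seconds = (max_pcr + 1) / PCR_CLOCK_HZ   # time until the counter wraps

print(max_pcr)              # 2576980377599
print(wrap_seconds / 3600)  # ≈ 26.5 hours of continuous streaming
```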
I wrote a simple moving average with a moving window over temperatures read as a voltage between 0 and 10 V.
The algorithm appears to work, but depending on which temperatures filled the window first, the moving average keeps an offset for any values not near them. For example, running this program with the temperature sensor plugged in at room temperature yields 4.4 V, or 21.3 °C. But if I unplug the sensor, the voltage drops to 1.4 V, yet the moving average settles at 1.6 V. The offset becomes smaller as I increase the window size. How can I remove this offset even for small window sizes, e.g. 20?
REM SMA Num must be greater than 1
#DEFINE SMANUM 20
PROGRAM
'Program 3 - Simple Moving Average Test
CLEAR
DIM SA(1)
DIM SA0(SMANUM) : REM Moving Average Window as Array
DIM LV1
DIM SV2
LV0 = 0 : REM Counter
SV0 = 0 : REM Average
SV1 = 0 : REM Sum
WHILE(1)
  SA0(LV0 MOD SMANUM) = PLPROBETEMP : REM add Temperature to head of window
  SV1 = SV1 + SA0(LV0 MOD SMANUM) : REM add new value to sum
  IF(LV0 >= (SMANUM)) : REM check if we have min num of values
    SV1 = SV1 - SA0((LV0+1) MOD SMANUM) : REM remove oldest value from sum
    SV0 = SV1 / SMANUM : REM calc moving average
    PRINT "Avg: " ; SV0 , " Converted: " ; SV0 * 21.875 - 75
  ENDIF
  LV0 = LV0 + 1 : REM increment counter
WEND
ENDP
(Note this is written in ACROBASIC for the ACR9000 by Parker)
Output - Temp Sensor attached
Raw: 4.43115 Avg: 4.41926 Converted: 21.6713125
Raw: 4.43115 Avg: 4.41938 Converted: 21.6739375
Raw: 4.43359 Avg: 4.41963 Converted: 21.67940625
Raw: 4.43359 Avg: 4.41987 Converted: 21.68465625
Raw: 4.43359 Avg: 4.42012 Converted: 21.690125
Raw: 4.43359 Avg: 4.42036 Converted: 21.695375
Raw: 4.43359 Avg: 4.42061 Converted: 21.70084375
...remove temp sensor while program is running
Raw: 1.40625 Avg: 1.55712 Converted: -40.938
Raw: 1.40381 Avg: 1.55700 Converted: -40.940625
Raw: 1.40625 Avg: 1.55699 Converted: -40.94084375
Raw: 1.40625 Avg: 1.55699 Converted: -40.94084375
Raw: 1.40381 Avg: 1.55686 Converted: -40.9436875
Raw: 1.40381 Avg: 1.55674 Converted: -40.9463125
Raw: 1.40625 Avg: 1.55661 Converted: -40.94915625
A noticeable offset appears between the raw and moving average after removing the sensor.
The offset also occurs in the reverse order:
Output - Begin program w/ Temp Sensor removed
Raw: 1.40381 Avg: 1.40550 Converted: -44.2546875
Raw: 1.40625 Avg: 1.40550 Converted: -44.2546875
Raw: 1.40625 Avg: 1.40549 Converted: -44.25490625
Raw: 1.40625 Avg: 1.40549 Converted: -44.25490625
Raw: 1.40625 Avg: 1.40548 Converted: -44.255125
Raw: 1.40625 Avg: 1.40548 Converted: -44.255125
... attach temp sensor while program is running
Raw: 4.43848 Avg: 4.28554 Converted: 18.7461875
Raw: 4.43848 Avg: 4.28554 Converted: 18.7461875
Raw: 4.43848 Avg: 4.28554 Converted: 18.7461875
Raw: 4.43848 Avg: 4.28554 Converted: 18.7461875
Raw: 4.43848 Avg: 4.28554 Converted: 18.7461875
Raw: 4.43359 Avg: 4.28530 Converted: 18.7409375
Again, a noticeable offset appears between the raw values and the moving average after attaching the sensor.
The problem is that the value being subtracted from the sum was not actually the oldest value in the array: the oldest value had already been overwritten by the new value in the first line of the WHILE loop, so it was the second-oldest value that was being subtracted from the sum.
EDIT: Changed the Average and Sum variables to 64-bit floating point to address precision loss over time, on the OP's advice.
Ensuring that the oldest value is subtracted first (once the array is full) gives the expected answer:
REM SMA Num must be greater than 1
#DEFINE SMANUM 20
PROGRAM
'Program 3 - Simple Moving Average Test
CLEAR
DIM SA(1)
DIM SA0(SMANUM) : REM Moving Average Window as Array
DIM LV1
DIM DV2
LV0 = 0 : REM Counter
DV0 = 0 : REM Average (64-bit float)
DV1 = 0 : REM Sum (64-bit float)
WHILE(1)
  IF(LV0 >= (SMANUM)) : REM window is full
    DV1 = DV1 - SA0(LV0 MOD SMANUM) : REM remove oldest value from sum BEFORE overwriting it
  ENDIF
  SA0(LV0 MOD SMANUM) = PLPROBETEMP : REM add Temperature to head of window
  DV1 = DV1 + SA0(LV0 MOD SMANUM) : REM add new value to sum
  IF(LV0 >= (SMANUM)) : REM check if we have min num of values
    DV0 = DV1 / SMANUM : REM calc moving average
    PRINT "Avg: " ; DV0 , " Converted: " ; DV0 * 21.875 - 75
  ENDIF
  LV0 = LV0 + 1 : REM increment counter
WEND
ENDP
I don't have a running BASIC environment, but I tested this in Python and got the same incorrect output for code equivalent to your version, and the expected output for code equivalent to the version I've inserted above.
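For reference, here is a sketch of the corrected sliding-window logic in Python, with the same subtract-before-overwrite ordering as the fixed BASIC version:

```python
def moving_averages(samples, window):
    """Return the simple moving averages once the window is full.

    The oldest sample is removed from the running sum BEFORE its slot
    is overwritten, which is exactly the fix applied to the BASIC code.
    """
    buf = [0.0] * window
    total = 0.0
    averages = []
    for i, value in enumerate(samples):
        if i >= window:
            total -= buf[i % window]   # drop the oldest value first
        buf[i % window] = value        # then overwrite its slot
        total += value
        if i >= window - 1:
            averages.append(total / window)
    return averages

# Sensor at 4.0 V, then unplugged (1.0 V): the average converges to 1.0
# with no residual offset.
avgs = moving_averages([4.0] * 5 + [1.0] * 10, window=5)
print(avgs[-1])  # 1.0
```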