JCS 1.3 - pre-load cache from disk - caching

I am using the indexed disk cache with JCS 1.3. When I restart, JCS does not seem to pre-load data from disk; instead, it initializes the cache lazily.
On startup, the stats are as below:
Region Name = triplet_set_1
HitCountRam = 0
HitCountAux = 0
---------------------------LRU Memory Cache
List Size = 0
Map Size = 0
Put Count = 0
Hit Count = 0
Miss Count = 0
---------------------------Indexed Disk Cache
Is Alive = true
Key Map Size = 138832
Data File Length = 72470304
Hit Count = 0
Bytes Free = 0
Optimize Operation Count = 1
Times Optimized = 0
Recycle Count = 0
Recycle Bin Size = 0
Startup Size = 138832
Purgatory Hits = 0
Purgatory Size = 0
Working = true
Alive = false
Empty = true
Size = 0
Region Name = triplet_set_1
HitCountRam = 200
HitCountAux = 100
I was hoping to see a large memory map size, given that the data file length is significant.
Thanks a lot


How to activate parallel execution for OMNeT++ project

I have an OMNeT++ application (mFogsim) that I want to execute in parallel. When I partition the modules manually, every partition id must equal zero; if I assign any number other than zero (1, for example), OMNeT++ throws an error.
The working configuration:
**Fog.router.partition-id = 0
**Fog.Broker.partition-id = 0
**Fog.user*.partition-id = 0
**Fog.ap*.partition-id = 0
**Fog.usr[*].partition-id = 0
**Fog.Fog*.partition-id = 0
**Fog.router*.partition-id = 0
**Fog.Broker*.partition-id = 0
**Fog.Internet.partition-id = 0
**Fog.Datacntr.partition-id = 0
**Fog.configurator.partition-id = 0
**Fog.radioMedium.partition-id = 0
The configuration that throws an error if I change any partition id to a number other than zero, for example:
**Fog.router.partition-id = 0
**Fog.Broker.partition-id = 0
**Fog.user*.partition-id = 0
**Fog.ap*.partition-id = 0
**Fog.usr[*].partition-id = 0
**Fog.Fog*.partition-id = 0
**Fog.router*.partition-id = 0
**Fog.Broker*.partition-id = 0
**Fog.Internet.partition-id = 0
**Fog.Datacntr.partition-id = 1
**Fog.configurator.partition-id = 0
**Fog.radioMedium.partition-id = 0
The above configuration throws the following error:
Error in module (cModule) Fog (id=1) during network setup: wrong partitioning: value 1 too large for 'Fog.Datacntr' (total partitions=1)
Any ideas?
Additional info:
OS: Ubuntu 16.04
RAM: 32 GB
CPU: 40 logical cores
In your mFogsim.ini add:
[General]
parsim-num-partitions = 2
where 2 is the number of partitions into which you want to divide the network.
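For completeness, parallel simulation itself also has to be switched on. A minimal sketch of the relevant [General] entries; the class names follow the stock OMNeT++ parsim options and may need adjusting for your setup (e.g. cMPICommunications when launching under MPI):

```ini
[General]
# enable parallel distributed simulation
parallel-simulation = true
# transport between partitions (named pipes work on a single host)
parsim-communications-class = "cNamedPipeCommunications"
# conservative synchronization via the null message protocol
parsim-synchronization-class = "cNullMessageProtocol"
parsim-num-partitions = 2
```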

High CPU load on SYN flood

When under a SYN flood attack, my CPU reaches 100% in no time, consumed by the kernel process named ksoftirqd.
I have tried many mitigations, but none solved the problem.
This is my sysctl configuration, as returned by sysctl -p:
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
fs.file-max = 10000000
fs.nr_open = 10000000
net.core.somaxconn = 128
net.core.netdev_max_backlog = 2500
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_max_tw_buckets = 262144
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.ipv4.tcp_syn_retries = 3
net.ipv4.tcp_tw_reuse = 1
net.netfilter.nf_conntrack_max = 10485760
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 30
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 15
vm.swappiness = 10
net.ipv4.icmp_echo_ignore_all = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.tcp_synack_retries = 1
Even after activating SYN cookies, the CPU load stays the same.
The listen queue of port 443 (the port under attack) shows 512 connections in SYN_RECV, which is the default backlog limit set by NGINX.
This is also weird, because somaxconn is set to a much lower value than 512 (128), so how does it exceed that limit?
somaxconn is supposed to be the upper bound for every socket's listen backlog, and it's not.
I have read so much and I'm confused.
As far as I understand, somaxconn is the backlog size for both the LISTEN and ACCEPT queues,
so what exactly is tcp_max_syn_backlog?
And how do I calculate each queue's size?
I also read that SYN cookies do not activate immediately, but only after the tcp_max_syn_backlog size is reached; is that true?
If so, that means its value needs to be lower than somaxconn.
I even tried activating tcp_abort_on_overflow while under attack, but nothing changed.
If it's true that SYN cookies activate on overflow, what happens when both are applied together?
I have 3 GB of RAM, of which only 700 MB is used; my only problem is the CPU load.
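One detail worth noting on the 512-vs-128 discrepancy: on Linux, the backlog passed to listen() is silently capped at net.core.somaxconn, but that cap applies to the accept (fully established) queue. Sockets counted in SYN_RECV sit in the separate SYN (half-open) queue, whose limit is tied to tcp_max_syn_backlog, so it can legitimately exceed somaxconn. A simplified sketch (exact behavior varies by kernel version):

```python
def effective_queue_limits(listen_backlog, somaxconn, tcp_max_syn_backlog):
    """Simplified model of the two listen-socket queue limits on Linux."""
    # accept (ESTABLISHED) queue: listen() backlog is silently capped
    # at net.core.somaxconn
    accept_queue = min(listen_backlog, somaxconn)
    # SYN (half-open, SYN_RECV) queue: bounded by tcp_max_syn_backlog
    syn_queue = tcp_max_syn_backlog
    return accept_queue, syn_queue

# with the values from this question (NGINX backlog 512, somaxconn 128,
# tcp_max_syn_backlog 2048):
print(effective_queue_limits(512, 128, 2048))
```

Under this model, the accept queue is limited to 128, while up to 2048 half-open connections can show up in SYN_RECV.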

How to find memory and runtime used by a NuSMV model

Given a NuSMV model, how do I find its runtime and how much memory it consumed?
The runtime can be found using this command at the system prompt: /usr/bin/time -f "time %e s" NuSMV filename.smv
The above gives the wall-clock time. Is there a better way to obtain runtime statistics from within NuSMV itself?
Also, how do I find out how much RAM the program used while processing the file?
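As an external alternative (a sketch, not a NuSMV feature), on Linux the wall time and the peak resident set size of a child process can be read via Python's resource module; the NuSMV invocation shown is illustrative:

```python
import resource
import subprocess
import time

def profile(cmd):
    """Run cmd to completion; return (wall-clock seconds, peak RSS of
    child processes in kB, as reported by getrusage on Linux)."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    wall = time.perf_counter() - start
    peak_kb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    return wall, peak_kb

# illustrative invocation (replace with your model file):
# wall, peak_kb = profile(["NuSMV", "filename.smv"])
```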
One possibility is to use the usage command, which displays the amount of RAM currently in use as well as the user and system time consumed by the tool since it was started (so usage should be called both before and after each operation you want to profile).
An example execution:
NuSMV > usage
Runtime Statistics
------------------
Machine name: *****
User time 0.005 seconds
System time 0.005 seconds
Average resident text size = 0K
Average resident data+stack size = 0K
Maximum resident size = 6932K
Virtual text size = 8139K
Virtual data size = 34089K
data size initialized = 3424K
data size uninitialized = 178K
data size sbrk = 30487K
Virtual memory limit = -2147483648K (-2147483648K)
Major page faults = 0
Minor page faults = 2607
Swaps = 0
Input blocks = 0
Output blocks = 0
Context switch (voluntary) = 9
Context switch (involuntary) = 0
NuSMV > reset; read_model -i nusmvLab.2018.06.07.smv ; go ; check_property ; usage
-- specification (L6 != pc U cc = len) IN mm is true
-- specification F (min = 2 & max = 9) IN mm is true
-- specification G !((((max > arr[0] & max > arr[1]) & max > arr[2]) & max > arr[3]) & max > arr[4]) IN mm is true
-- invariant max >= min IN mm is true
Runtime Statistics
------------------
Machine name: *****
User time 47.214 seconds
System time 0.284 seconds
Average resident text size = 0K
Average resident data+stack size = 0K
Maximum resident size = 270714K
Virtual text size = 8139K
Virtual data size = 435321K
data size initialized = 3424K
data size uninitialized = 178K
data size sbrk = 431719K
Virtual memory limit = -2147483648K (-2147483648K)
Major page faults = 1
Minor page faults = 189666
Swaps = 0
Input blocks = 48
Output blocks = 0
Context switch (voluntary) = 12
Context switch (involuntary) = 145

Minimization algorithm to solve a 3 equations, 2 unknowns system

As a civil engineer, I am working on a program to find the equilibrium of a reinforced concrete section subjected to a flexural moment.
Reinforced concrete cross-section equilibrium:
Basically, I have 2 unknowns, which are eps_sup and eps_inf.
I have a constant, M.
I have some variables that depend only on the values of (eps_sup, eps_inf). The functions are non-linear; no need to go into this.
When I have the right pair of values, the following equations are satisfied:
Fc + Fs = 0 (force equilibrium)
M/z = Fc = -Fs (moment equilibrium)
My algorithm, as it is today, consists in finding the minimal value of abs(Fc+Fs)/Fc + abs(M_calc-M)/M.
To do this I iterate over both eps_sup and eps_inf between given limits with a given step, and the step needs to be small enough to find a solution.
It works, but it is very (very) slow, since it sweeps a very wide range of values without trying to reduce the number of iterations.
Surely there is an optimized approach, and that is where I need your help.
'Constants:
M
'Variables:
delta = 10000000000000
eps_sup = 0
eps_inf = 0
M_calc = 0
Fc = 0
Fs = 0
z = 0
eps_sup_candidate = 0
eps_inf_candidate = 0
For eps_sup = 0 To 0.005 Step 0.000001
    For eps_inf = -0.05 To 0 Step 0.000001
        Fc = f(eps_sup, eps_inf)
        Fs = g(eps_sup, eps_inf)
        z = h(eps_sup, eps_inf)
        M_calc = Fc * z
        If (abs(Fc + Fs) / Fc + abs(M_calc - M) / M) < delta Then
            delta = abs(Fc + Fs) / Fc + abs(M_calc - M) / M
            eps_sup_candidate = eps_sup
            eps_inf_candidate = eps_inf
        End If
    Next
Next
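One way to cut the iteration count dramatically is a coarse-to-fine grid search: evaluate the objective on a coarse grid, then repeatedly zoom in around the best point found so far. A minimal Python sketch, with a placeholder quadratic objective standing in for the real residual built from f, g and h:

```python
def refine_search(objective, lo_s, hi_s, lo_i, hi_i, n=21, rounds=6):
    """Minimize objective(eps_sup, eps_inf) by repeated grid refinement.
    Each round evaluates an n x n grid, then shrinks the search window
    around the best point found so far (no bound clamping in this sketch)."""
    best = (float("inf"), None, None)
    for _ in range(rounds):
        step_s = (hi_s - lo_s) / (n - 1)
        step_i = (hi_i - lo_i) / (n - 1)
        for a in range(n):
            for b in range(n):
                es = lo_s + a * step_s
                ei = lo_i + b * step_i
                d = objective(es, ei)
                if d < best[0]:
                    best = (d, es, ei)
        _, es, ei = best
        # zoom in around the current best point
        lo_s, hi_s = es - step_s, es + step_s
        lo_i, hi_i = ei - step_i, ei + step_i
    return best

# placeholder objective with a known minimum at (0.003, -0.02):
demo = lambda es, ei: (es - 0.003) ** 2 + (ei + 0.02) ** 2
```

With n = 21 and 6 rounds this costs 6 * 21 * 21 = 2646 evaluations instead of the roughly 5000 * 50000 of the exhaustive double loop, while the window shrinks by a factor of 10 per round. Note that refinement can miss the global minimum if the objective has many competing local minima; a general-purpose minimizer (e.g. Nelder-Mead from a numerical library) would be the idiomatic alternative.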

Getting poor performance while saving to Redis cache (using ServiceStack.Redis)

I am getting very poor performance while saving data to the Redis cache.
Scenario:
1) Using the Redis cache service provided by Microsoft Azure.
2) Running the code in a virtual machine created on Azure.
3) Both the VM and the cache service are in the same location.
Code Snippet:
public void MyCustomFunction()
{
    Stopwatch totalTime = Stopwatch.StartNew();
    RedisEndpoint config = new RedisEndpoint();
    config.Ssl = true;
    config.Host = "redis.redis.cache.windows.net";
    config.Password = Form1.Password;
    config.Port = 6380;
    RedisClient client = new RedisClient(config);
    int j = 0;
    for (int i = 0; i < 500; i++)
    {
        var currentStopWatchTime = Stopwatch.StartNew();
        var msgClient = client.As<Message>();
        List<string> dataToUpload = ClientData.GetRandomData();
        string myCachedItem_1 = dataToUpload[1].ToString();
        Random ran = new Random();
        string newKey = Guid.NewGuid().ToString();
        Message newItem = new Message
        {
            Id = msgClient.GetNextSequence(), // Size: long
            //Id = (long)ran.Next(),
            Key = j.ToString(),               // Size: Int32
            Value = newKey,                   // Size: GUID string
            Description = myCachedItem_1      // Size: 5 KB
        };
        string listName = ran.Next(1, 6).ToString();
        msgClient.Lists[listName].Add(newItem);
        //msgClient.Store(newItem);
        Console.WriteLine("Loop Count : " + j++ + " , Total no. of items in List : " + listName + " are : " + msgClient.Lists[listName].Count);
        Console.WriteLine("Current Time: " + currentStopWatchTime.ElapsedMilliseconds + " Total time:" + totalTime.ElapsedMilliseconds);
        Console.WriteLine("Cache saved");
    }
}
Performance (While Saving):
Note : (All times are in milliseconds)
Loop Count : 0 , Total no. of items in List : 2 are : 1
Current Time: 310 Total time:342
Cache saved
Loop Count : 1 , Total no. of items in List : 3 are : 1
Current Time: 6 Total time:349
Cache saved
Loop Count : 2 , Total no. of items in List : 5 are : 1
Current Time: 3 Total time:353
Cache saved
Loop Count : 3 , Total no. of items in List : 5 are : 2
Current Time: 3 Total time:356
Cache saved
Loop Count : 4 , Total no. of items in List : 5 are : 3
Current Time: 3 Total time:360
Cache saved
.
.
.
.
.
Loop Count : 330 , Total no. of items in List : 4 are : 69
Current Time: 2 Total time:7057
Cache saved
Loop Count : 331 , Total no. of items in List : 4 are : 70
Current Time: 3 Total time:7061
Cache saved
Loop Count : 332 , Total no. of items in List : 4 are : 71
Current Time: 2 Total time:7064
Cache saved
Performance (While Fetching)
List : 1
No. of items : 110
Time : 57
List : 2
No. of items : 90
Time : 45
List : 3
No. of items : 51
Time : 23
List : 4
No. of items : 75
Time : 32
List : 5
No. of items : 63
Time : 33
If you're dealing in batches, you should look at reducing the number of synchronous network requests you make: network latency is going to be the major performance issue when communicating with network services.
In this example you're making a read when you call:
msgClient.GetNextSequence();
and a write when you call:
msgClient.Lists[listName].Add(newItem);
That is a total of 1000 synchronous request/reply network operations in a single thread, where each operation depends on the previous one and has to complete before the next can be sent. This is why network latency will be the major source of performance issues, and the first thing you should optimize.
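A back-of-the-envelope model of why round trips dominate (the RTT values here are hypothetical):

```python
def total_latency_ms(round_trips, rtt_ms):
    # synchronous request/reply: each operation waits out a full round trip
    # before the next one can be sent
    return round_trips * rtt_ms

# 1000 sequential operations at a hypothetical 5 ms RTT, vs 2 batched ones:
print(total_latency_ms(1000, 5))  # 5000 ms spent purely on the network
print(total_latency_ms(2, 5))     # 10 ms
```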
Batching Requests
If you're dealing with batched requests, this can be optimized greatly by reducing the number of reads and writes: fetch all the ids in a single request and store the messages with the AddRange() batch operation, e.g.:
var redisMessages = Redis.As<Message>();
const int batchSize = 500;

//fetch the next 500 sequence ids in a single request
var nextIds = redisMessages.GetNextSequence(batchSize);

var msgBatch = batchSize.Times(i =>
    new Message {
        Id = nextIds - (batchSize - i) + 1,
        Key = i.ToString(),
        Value = Guid.NewGuid().ToString(),
        Description = "Description"
    });

//Store all messages in a single multi operation request
redisMessages.Lists[listName].AddRange(msgBatch);
This condenses the 1000 redis operations down to 2.
Then, if you need to, you can fetch all messages with:
var allMsgs = redisMessages.Lists[listName].GetAll();
or fetch specific ranges using the GetRange(startingFrom, endingAt) APIs.
