How to remove the memory limit of a Windows Job Object?

I have configured a memory limit for my process using Job Objects. Now I'd like to remove that limit. I assume I have to set JOBOBJECT_EXTENDED_LIMIT_INFORMATION.ProcessMemoryLimit to a specific value and call SetInformationJobObject. What value should I use? It's undocumented.
Changing the process's job assignment seems problematic, and simply breaking away from the job is not possible (at least on Windows 7).
For context, here's some C# code that I am using, though it seems immaterial to the question:
var limitInfo = new NativeMethods.JOBOBJECT_EXTENDED_LIMIT_INFORMATION()
{
    BasicLimitInformation = new NativeMethods.JOBOBJECT_BASIC_LIMIT_INFORMATION()
    {
        LimitFlags = NativeMethods.LimitFlags.JOB_OBJECT_LIMIT_PROCESS_MEMORY,
    },
    ProcessMemoryLimit = (UIntPtr)maxProcessMemoryBytes,
};
if (!NativeMethods.SetInformationJobObject(jobHandle, NativeMethods.JOBOBJECTINFOCLASS.JobObjectExtendedLimitInformation, (IntPtr)(&limitInfo), (uint)structSize))
    throw new Win32Exception();
Passing 0 is rejected by the OS, and -1 results in a reported limit of -4 KB. There's insufficient validation, apparently.

First call QueryInformationJobObject to obtain the current limits and flags, remove the JOB_OBJECT_LIMIT_PROCESS_MEMORY flag from JOBOBJECT_EXTENDED_LIMIT_INFORMATION::BasicLimitInformation::LimitFlags, and finally call SetInformationJobObject() with the modified data to remove the process memory limit.
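In terms of the question's own NativeMethods wrappers, that might look like the sketch below. It assumes a QueryInformationJobObject P/Invoke declaration matching the existing SetInformationJobObject one; treat it as a rough sketch, not a drop-in implementation.
var limitInfo = new NativeMethods.JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
// Marshal is System.Runtime.InteropServices.Marshal
uint structSize = (uint)Marshal.SizeOf<NativeMethods.JOBOBJECT_EXTENDED_LIMIT_INFORMATION>();

// Read the current limits first so all other flags and values are preserved.
if (!NativeMethods.QueryInformationJobObject(jobHandle,
        NativeMethods.JOBOBJECTINFOCLASS.JobObjectExtendedLimitInformation,
        (IntPtr)(&limitInfo), structSize, IntPtr.Zero))
    throw new Win32Exception();

// Clear only the process memory limit flag; once the flag is gone the OS
// ignores ProcessMemoryLimit, so no sentinel value is needed.
limitInfo.BasicLimitInformation.LimitFlags &= ~NativeMethods.LimitFlags.JOB_OBJECT_LIMIT_PROCESS_MEMORY;

if (!NativeMethods.SetInformationJobObject(jobHandle,
        NativeMethods.JOBOBJECTINFOCLASS.JobObjectExtendedLimitInformation,
        (IntPtr)(&limitInfo), structSize))
    throw new Win32Exception();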

What is the difference when using KEYS and SCAN in a LUA script?

I am currently using Redis (version 3.2.12) as a cache for my Spring Boot application. I want to match a list of patterns and then delete the matching keys from Redis. I've opted to use Lua scripts and have come up with the following script.
local cursor = '0';
local keysVar = {};
-- enable effect-based replication before the first write, since SCAN is
-- a non-deterministic command
redis.replicate_commands();
repeat
    local scanResult = redis.call('SCAN', cursor, 'MATCH', ARGV[1], 'COUNT', 100);
    local keys = scanResult[2];
    for i = 1, #keys do
        keysVar[#keysVar + 1] = keys[i]; -- append; keysVar[i] would overwrite earlier batches
    end;
    cursor = scanResult[1];
until cursor == '0';
if #keysVar > 0 then
    redis.call('DEL', unpack(keysVar));
end;
return keysVar;
From what I've read, the SCAN command was created as a non-blocking alternative to the KEYS command, which can cause major issues when used in production. But since I've decided to use Lua, and Redis guarantees a script's atomic execution, all server activity is blocked for the script's entire runtime anyway. Won't using KEYS and SCAN in a Lua script come to the same thing, given that both have a time complexity of O(N)?
So, what is the difference between using the above script vs. using
return redis.call('DEL', 'defaultKey', unpack(redis.call('KEYS', #keypattern)))
One more question: why is the KEYS command regarded as deterministic? Can't the number of keys returned change when, say, a slave performs the KEYS command with a pattern? The reason given for the SCAN command being non-deterministic is that the results returned may vary from master to slave. Can't the same be said for KEYS too?
And since a SCAN uses a cursor, how is there a chance of the same key getting returned multiple times?
I am trying to delete the keys for a list of patterns using RedisTemplate.
private void clearCache(List<String> patterns) {
    Resource scriptSource = new ClassPathResource("cleanup.lua");
    RedisScript<String> redisScript = RedisScript.of(scriptSource, String.class);
    patterns.forEach(pattern -> {
        redisTemplate.execute(redisScript, Collections.emptyList(), pattern);
    });
}
Is there a correct/recommended way to do so?
Won't using KEYS and SCAN in an LUA script result in the same as both of them have a time complexity of O(N)?
YES, they both have O(N) complexity. So you should NOT use either in a production environment.
So, what is the difference between using the above script vs. using...
The script with an unpacked KEYS result might fail if there are too many keys, because (if I remember correctly) Redis has a limit on the byte length of a command.
And since a SCAN uses a cursor, how is there a chance of the same key getting returned multiple times?
Because the keyspace normally changes dynamically during the scan, e.g. a new key is added, or an old key expires or is removed. SCAN only guarantees that a key present for the whole iteration is returned at least once; rehashing of the keyspace can cause some keys to be visited more than once.
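If the cleanup doesn't genuinely need to be atomic, an alternative worth considering is iterating with SCAN from the client and deleting as you go, which avoids both the long-running script and the unpack limit. A minimal sketch follows, shown in C# with StackExchange.Redis purely for illustration (the question's app is Spring Boot, where the same pattern applies; the connection string and key pattern are assumptions):
using System;
using StackExchange.Redis;

class PatternCleanup
{
    static void Main()
    {
        var mux = ConnectionMultiplexer.Connect("localhost:6379");
        var db = mux.GetDatabase();
        foreach (var endpoint in mux.GetEndPoints())
        {
            var server = mux.GetServer(endpoint);
            // Keys() pages through the keyspace with SCAN on Redis >= 2.8,
            // so the server is never blocked for the whole iteration.
            foreach (var key in server.Keys(pattern: "cache:*", pageSize: 100))
                db.KeyDelete(key);
        }
    }
}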

Data Loss Prevention policy making OpenMsgStore fail (0x80040312)

When DLP policy is enabled, Redemption fails with the error:
"All business e-mail messages are protected based on a policy set in your organization. There was an error opening the protected e-mail message."
ulLowLevelError: 2147746578 (i.e. 0x80040312)
ulContext: 805701633 (0x30060801)
Is there any way around this?
The error occurs when trying to access the IPMRootFolder property of a Store object:
// A previous version of the code was multi-threaded, it is no longer.
Session = OutlookRpcLoader.new_RDOSession();
Session.Logon(ProfileName: profile, ShowDialog: false, NewSession: true);
var stores = Session.Stores;
var store = stores["{STORE-NAME}"];
var root = store.IPMRootFolder;
The call stack shows that Redemption.IRDOStore.get_IPMRootFolder() threw the exception.
Edit
This is seen when using Redemption version 5.22.0.5498 loaded via the RedemptionLoader class in .NET (registry-free COM).
When testing with Redemption version 5.19.0.5238 from VBScript using CreateObject(), the error doesn't occur.
Could anything have changed between v5.19 and v5.22?
First of all, you need to detect where your code is running: on the foreground thread or on a background thread. I'd suggest checking the thread ID; the foreground thread has the value 1, and all background threads have values greater than one. If it is a secondary thread, you need to create a new Redemption session on the secondary thread where it will be used and set its MAPIOBJECT property to the value retrieved from the session on the main thread. For example, a rough sketch in VB.NET:
Dim PrimaryRDOSession As New Redemption.RDOSession()
PrimaryRDOSession.Logon([...])

Dim WorkerThread As New System.Threading.Thread(AddressOf ThreadProc)
WorkerThread.Start(PrimaryRDOSession.MAPIOBJECT)

Sub ThreadProc(ByVal param As Object)
    Dim ThdRDOSession As New Redemption.RDOSession()
    ThdRDOSession.MAPIOBJECT = param
    ' do other stuff
End Sub
Don't use objects created on the main thread when you are on a secondary one; be consistent about which thread each object is used on.
I believe this was caused by AppLocker rules blocking unsigned binaries. The resolution was to either code-sign the files or add the program to the AppLocker allow-list.

JDK 8: ConcurrentHashMap.compute seems occasionally to be allowing multiple calls to remapping function

I'm working on a highly concurrent application that uses an object cache based on a ConcurrentHashMap. My understanding of ConcurrentHashMap is that calls to the "compute" family of methods guarantee atomicity with respect to the remapping function. However, I've found what appears to be anomalous behavior: occasionally, the remapping function is called more than once.
The following snippet in my code shows how this can happen and what I have to do to work around it:
private ConcurrentMap<Integer, Object> cachedObjects
        = new ConcurrentHashMap<>(100000);
private ReadWriteLock externalLock = new ReentrantReadWriteLock();
private Lock visibilityLock = externalLock.readLock();
...
public void update(...) {
    ...
    Reference<Integer> lockCount = new Reference<>(0);
    try {
        newStats = cachedObjects.compute(objectId, (key, currentStats) -> {
            ...
            visibilityLock.lock();
            lockCount.set(lockCount.get() + 1);
            return updateFunction.apply(objectId, currentStats);
        });
    } finally {
        int count = lockCount.get();
        if (count > 1) {
            logger.debug("NOTE! visibilityLock acquired {} times!", count);
        }
        while (count-- > 0) {
            // if locked, then unlock. The unlock is outside the compute to
            // ensure the lock is released only after the modification is
            // visible to an iterator created from the active objects hashmap.
            visibilityLock.unlock();
        }
    }
    ...
}
Once in a great while, visibilityLock.lock() will be called more than once within the try block. The code in the finally block logs this, and I do see the log message when it happens. My remapping function is mostly idempotent, so with the exception of visibilityLock.lock() it's harmless to have it called more than once; when it is, the finally block handles it by unlocking as many times as needed.
visibilityLock is a read lock obtained from a ReentrantReadWriteLock. The point of this other lock is to ensure that another data structure outside this one cannot see the changes being made by the updateFunction until after compute returns.
Before we get side-tracked on non-issues, I'm already aware that the default implementation of ConcurrentMap.compute indicates that the remapping function can be called multiple times. However, the override (and corresponding documentation) in ConcurrentHashMap provides a guarantee of atomicity and the implementation shows this to be true (afaict).
Has anyone else run into this issue? Is it a JDK bug, or am I just doing something wrong?
I'm using:
$ java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
Eyeballing the JDK code (https://github.com/bpupadhyaya/openjdk-8/blob/master/jdk/src/share/classes/java/util/concurrent/ConcurrentHashMap.java), it's apparent that compute will not call the remappingFunction twice. That leaves three possibilities:
You don't have a ConcurrentHashMap, but something else. Check the concrete type at runtime and dump it to the log file.
You are calling compute twice. Is there any flow control in the function which may not be doing what you expect? You've removed most of the function so it's impossible for me to say.
You are calling compute once, but your remappingFunction is locking twice. Is there any flow control in the lambda which might not be doing what you think? Again, you have removed most of the function so there is nothing I can do to help.
To debug, check the lock count at the point of locking, and if it is nonzero, dump the stack to the log file.

Biztalk Debatched Message Value Caching

I get a file with 4000 entries and debatch it, so I don't lose the whole message if one entry has corrupt data.
The BizTalk map accesses a SQL Server. Before I debatched the message I simply cached the SQL data in the map, but now I have 4000 independent maps.
Without caching, the process takes about 30 times longer.
Is there a way to cache the data from the SQL Server somewhere outside the map without losing much performance?
It is not a recommended pattern to access a database in a Map.
Since what you describe sounds like you're retrieving static reference data, another option is to move the process to an Orchestration where the reference data is retrieved one time into a Message.
Then you can use a dual-input Map supplying the reference data and the business message.
In this pattern, you can either debatch in the Orchestration or use a Sequential Convoy.
I would always avoid accessing SQL Server in a map; it gets very easy to inadvertently make many more calls than you intend (whether because of a mistake in the map design or because of unexpected volume or usage of the map on a particular port or set of ports). In fact, I would generally avoid making any kind of call in a map that has to access another system or service, but if you must, then caching can help.
You can cache using, for example, MemoryCache. The pattern I use with that generally involves a custom C# library where you first check the cache for your value, and on a miss you query SQL (either for the particular entry or the entire cache), e.g.:
object _syncRoot = new object();
...
public string CheckCache(string key)
{
    string check = MemoryCache.Default.Get(key) as string;
    if (check != null)
        return check;
    lock (_syncRoot)
    {
        // make sure someone else didn't get here before we acquired the lock, avoid duplicate work
        check = MemoryCache.Default.Get(key) as string;
        if (check != null) return check;
        string sql = @"SELECT ...";
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            using (SqlCommand cmd = conn.CreateCommand())
            {
                cmd.CommandText = sql;
                cmd.Parameters.AddWithValue(...);
                // ExecuteScalar or ExecuteReader as appropriate, read values out,
                // assign to check, and store in cache via MemoryCache.Default.Add
                // with a sensible expiration
            }
        }
        return check;
    }
}
A few things to keep in mind:
This will work on a per-AppDomain basis, and pipelines and orchestrations run in separate AppDomains. If you are executing this map in both places, you'll end up with caches in both places. The complexity added in trying to share this across AppDomains is probably not worth it, but if you really need that, you should isolate your caching into something like a WCF NetTcp service.
This will use more memory - you shouldn't just throw everything and anything into a cache in BizTalk, and if you're going to cache stuff make sure you have lots of available memory on the machine and that BizTalk is configured to be able to use it.
The MemoryCache can store whatever you want - I'm using strings here, but it could be other primitive types or objects as well.
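For example, populating the cache with an absolute expiration is a one-liner (the key/value names and the 30-minute lifetime here are just placeholders):
using System;
using System.Runtime.Caching;

// Cache the looked-up value for 30 minutes; after that, the next
// CheckCache call misses and re-queries SQL Server.
MemoryCache.Default.Add(key, valueFromSql, DateTimeOffset.Now.AddMinutes(30));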

Rate limiting algorithm for throttling request

I need to design a rate limiter service for throttling requests.
For every incoming request, a method will check whether the requests per second have exceeded the limit. If so, it will return the amount of time the request needs to wait before being handled.
I'm looking for a simple solution that just uses the system tick count and RPS (requests per second). It should not use queues or complex rate-limiting algorithms and data structures.
Edit: I will be implementing this in C++. Also, note I don't want to use any data structure to store the requests currently being executed.
API would be like:
if (!RateLimiter.Limit())
{
    do work
    RateLimiter.Done();
}
else
    reject request
The most common algorithm used for this is the token bucket. There is no need to invent something new; just search for an implementation for your technology/language.
If your app is highly available / load balanced, you might want to keep the bucket information in some sort of persistent storage. Redis is a good candidate for this.
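To make the idea concrete, here is a minimal token bucket sketch in C# (the question mentions C++, so treat this purely as an illustration of the algorithm; all names are made up, and the class is not thread-safe as written):
using System;
using System.Diagnostics;

class TokenBucket
{
    private readonly double _capacity;     // maximum burst size, in tokens
    private readonly double _refillPerSec; // steady-state rate (tokens per second)
    private double _tokens;
    private long _lastTicks;

    public TokenBucket(double capacity, double refillPerSec)
    {
        _capacity = capacity;
        _refillPerSec = refillPerSec;
        _tokens = capacity;
        _lastTicks = Stopwatch.GetTimestamp();
    }

    // Returns 0 if the request may run now, otherwise the seconds to wait,
    // which is exactly what the question asks the limiter to report.
    public double TryAcquire()
    {
        long now = Stopwatch.GetTimestamp();
        double elapsed = (now - _lastTicks) / (double)Stopwatch.Frequency;
        _lastTicks = now;

        // Refill proportionally to elapsed time, capped at the burst size.
        _tokens = Math.Min(_capacity, _tokens + elapsed * _refillPerSec);

        if (_tokens >= 1.0)
        {
            _tokens -= 1.0; // consume one token and admit the request
            return 0.0;
        }
        return (1.0 - _tokens) / _refillPerSec; // time until one token accrues
    }
}
A request runs when a token is available; otherwise the returned value is the estimated wait time, with no queue and no per-request bookkeeping.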
I wrote Limitd, which is a different approach: a daemon for limits. The application asks the daemon, using a limitd client, whether the traffic is conformant. The limit is configured on the limitd server, and the app is agnostic to the algorithm.
Since you give no hint of language or platform, I'll just give some pseudocode. Things you are going to need:
a list of currently executing requests
a way to get notified when a request is finished
And the code can be as simple as:
and the code can be as simple as
var ListOfCurrentRequests; //a list of the start times of current requests
var MaxAmountOfRequests; //just a limit
var AverageExecutionTime; //if the execution time is non-deterministic, the best we can do is keep an average

//for each request, either execute it or return the PROBABLE amount of time to wait
function OnNewRequest(Identifier)
{
    if (count(ListOfCurrentRequests) < MaxAmountOfRequests) //if we have room
    {
        Struct Tracker
        Tracker.Request = Identifier;
        Tracker.StartTime = Now; //save the start time
        AddToList(Tracker) //add to list
    }
    else
    {
        return CalculateWaitTime() //return the PROBABLE time it will take for a 'slot' to be available
    }
}

//when a request has ended, release a 'slot' and update the average execution time
function OnRequestEnd(Identifier)
{
    Tracker = RemoveFromList(Identifier);
    UpdateAverageExecutionTime(Now - Tracker.StartTime);
}

function CalculateWaitTime()
{
    //the one that started first is PROBABLY the first to finish
    Tracker = GetTheOneThatIsRunningTheLongest(ListOfCurrentRequests);
    //assume it will finish in the average time, minus how long it has already been running
    ProbableTimeToFinish = AverageExecutionTime - (Now - Tracker.StartTime);
    return ProbableTimeToFinish
}
But keep in mind that there are several problems with this:
it assumes that by returning the wait time, the client will issue a new request after that time has passed; since the time is an estimate, you cannot use it to delay execution, and you can still overload the system
since you are not keeping a queue and delaying the request, a client can end up waiting longer than it needs to
and lastly, since you do not want to keep a queue to prioritize and delay the requests, you can get a livelock, where you tell a client to return later, but when it returns someone else has already taken its spot, and it has to return again
So the ideal solution would be an actual execution queue, but since you don't want one... I guess this is the next best thing.
According to your comments, you just want a simple (not very precise) requests-per-second gate. In that case the code can be something like this:
var CurrentRequestCount;
var MaxAmountOfRequests;
var CurrentTimestampWithPrecisionToSeconds;

function CanRun()
{
    if (Now.AsSeconds > CurrentTimestampWithPrecisionToSeconds) //a second has passed, reset the counter
    {
        CurrentTimestampWithPrecisionToSeconds = Now.AsSeconds; //start a new one-second window
        CurrentRequestCount = 0;
    }
    if (CurrentRequestCount >= MaxAmountOfRequests)
        return false;
    CurrentRequestCount++
    return true;
}
Doesn't seem like a very reliable method of control... but I believe it's what you asked for.
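For what it's worth, a thread-safe rendering of that last sketch, using the system tick count as the question requested, might look like this in C# (illustrative only; the asker mentioned C++, and all names here are made up):
using System;

class FixedWindowLimiter
{
    private readonly int _maxRequestsPerSecond;
    private readonly object _gate = new object();
    private long _windowStartMs = Environment.TickCount64;
    private int _count;

    public FixedWindowLimiter(int maxRequestsPerSecond)
    {
        _maxRequestsPerSecond = maxRequestsPerSecond;
    }

    public bool CanRun()
    {
        lock (_gate)
        {
            long now = Environment.TickCount64;
            if (now - _windowStartMs >= 1000) //a second has passed, start a new window
            {
                _windowStartMs = now;
                _count = 0;
            }
            if (_count >= _maxRequestsPerSecond)
                return false;
            _count++;
            return true;
        }
    }
}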
