If I have multiple processes accessing a registry value thousands of times per second, will there be any significant performance implications of reading this registry value?
The registry value will never change; it is read-only. Another question: is reading the registry value a blocking operation?
The registry value stores database connection details and is accessed by ASP.NET applications, WinForms applications, and WCF services.
Thanks,
Stuart
The registry is fast, really fast. But thousands of times per second? At the very least, cache the value in each application so you only have to read it once on app startup.
The Windows registry is just a file that happens to have more protection around it than other files.
Just like any file, however, there will be a performance hit as it is accessed.
I would suggest that you read your values once, on application startup, store them in memory, and pass them to your objects as required.
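For illustration, here is a minimal C# sketch of that read-once pattern; the key path and value name are made up, so substitute wherever your connection details actually live:

    using System;
    using Microsoft.Win32;

    static class ConnectionSettings
    {
        // Hypothetical location of the connection details - adjust to your real key.
        private const string KeyPath = @"HKEY_LOCAL_MACHINE\SOFTWARE\MyCompany\MyApp";
        private const string ValueName = "ConnectionString";

        // Lazy<T> performs the registry read exactly once per process, on first use,
        // in a thread-safe way; every later access is just a read from memory.
        private static readonly Lazy<string> _cached = new Lazy<string>(
            () => (string)Registry.GetValue(KeyPath, ValueName, null));

        public static string ConnectionString => _cached.Value;
    }

With this in place, each process pays for a single registry read at startup, and the thousands of reads per second never touch the registry at all.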
I'm using MemoryCache in a .NET Core C# project. I'm using it to store a list of enums that I read out of a collection (I want to cache it because loading the enums uses reflection, which takes time).
However, since my collection isn't going to change, I don't want it to expire at all. How do I do that? If I don't set any expiration (SlidingExpiration or AbsoluteExpiration), will my cache never expire?
If you do not specify an absolute and/or sliding expiration, then the item will theoretically remain cached indefinitely. In practical terms, persistence is dependent on two factors:
Memory pressure. If the system is resource-constrained and a running app needs additional memory, cached items are eligible to be removed from memory to free up RAM. You can, however, disable this by setting the entry's cache priority to CacheItemPriority.NeverRemove (see the sketch at the end of this answer).
Memory cache is process-bound. That means if you restart the server or your application restarts for whatever reason, anything stored in memory cache is gone. It also means that in web farm scenarios, each instance of your application will have its own memory cache, since each is a separate process (even if you're simply running multiple instances on the same server).
If you need the operation to truly run only once, and the result to persist past app shutdown and even across multiple instances of your app, you need to employ distributed caching with a backing store like SQL Server or Redis.
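To make the in-memory case concrete, here is a minimal sketch using Microsoft.Extensions.Caching.Memory; the cache key and the reflection scan are placeholders for your own enum-loading code:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.Extensions.Caching.Memory;

    class EnumCatalog
    {
        private readonly IMemoryCache _cache;
        public EnumCatalog(IMemoryCache cache) => _cache = cache;

        public List<Type> GetEnumTypes() =>
            _cache.GetOrCreate("enum-types", entry =>
            {
                // No AbsoluteExpiration or SlidingExpiration is set, so the entry never
                // expires on its own; NeverRemove also exempts it from eviction under
                // memory pressure.
                entry.Priority = CacheItemPriority.NeverRemove;

                // The expensive reflection scan runs only on the first call in this process.
                return typeof(EnumCatalog).Assembly.GetTypes()
                    .Where(t => t.IsEnum)
                    .ToList();
            });
    }

Keep in mind this is still per-process: after an app restart, or in each additional instance, the scan runs once more.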
I found that it's possible to automatically extend Liferay's session so that the session doesn't expire until you close your browser. Are there any limitations or disadvantages to such an approach? Any performance degradation or load issues?
As with any abstract question about hypothetical performance impact (or premature optimization), this question is basically unanswerable - but here are some criteria:
Naturally, pinging the server in order to extend a session will incur some extra load - if that results in a performance decrease, you'll most likely have a highly congested installation in the first place. If your server is bored all day, the extra ping won't bring it down.
You may or may not have custom applications running in your installation that store data in the user's session. If those are a few bytes (like Liferay does, e.g. the currently logged in user's information): There's probably no degradation. If you store 1MB of information per session (in your own custom apps - Liferay doesn't do this), things might differ: Just multiply your session storage size by the number of concurrent users that you expect. In case this use of memory indicates a problem: Make your custom apps use the session less - it's bad style anyway.
Will your particular installation suffer from any degradation? Measure. There's no way around this.
From a system maintenance point of view: If you're running a cluster and want to take individual machines out of the load balancer: Artificially extending sessions might indicate that a machine still has sessions open, even though they're mostly on unattended browsers - you'll get inflated numbers and it takes longer to bring machines down when you need to wait for the session count to come close to zero.
We have a server that securely sends a key to the client via a custom login program. The key is subsequently used for encrypting further client requests. That key is kept on the client's disk, like a cookie, and is used by a program that might be started and stopped multiple times before the client decides to log out, making the key obsolete (hence the key is saved on disk, because there may be long periods between login and logout when no program is running).
It would seem to be a bit more secure to keep the key only in memory instead of on disk (it's OK if a crash or restart loses the key and subsequently forces a new login).
On Windows, what's the best way to retain the key only in memory (ignoring that the memory might be virtual and paged to disk) between separate executions of a program?
One possible solution is to leave a trivial Windows service running on the client that accepts the key, retains it in the service's memory, and returns it upon request (or use an equivalent trivial DDE server that does the same thing). A non-.NET solution is preferred.
Is there a standard Windows service usually running that already provides this ability?
Is there a better approach?
There are probably a couple of solutions you can try that do not involve a running process:
Store it in a volatile registry key (REG_OPTION_VOLATILE); see the sketch after this list.
Store it in the global atom table. The key has to be stored as a string. You would probably require two atoms; one that stores the key and one used to locate the first atom so you can call GlobalGetAtomName. The second atom should have a known name like "YourAppName:S-UsersSidGoesHere" so you can call GlobalFindAtom.
If you decide to store it in a file in %temp% you could use TOKEN_STATISTICS.AuthenticationId as part of a key used to encrypt the real key. You could encrypt the file itself with EFS (FILE_ATTRIBUTE_ENCRYPTED)...
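You said a non-.NET solution is preferred, so treat the following purely as a sketch of the first option; the equivalent in native code is RegCreateKeyEx with REG_OPTION_VOLATILE. The key path and value name here are hypothetical:

    using Microsoft.Win32;

    const string SubKey = @"Software\MyLoginApp\Session";   // hypothetical location

    // Created as volatile, the key lives only in memory and vanishes when the HKCU hive
    // is unloaded (user logoff) or the machine reboots. If the key already exists as a
    // normal (non-volatile) key, the volatile option is ignored.
    using (var k = Registry.CurrentUser.CreateSubKey(
        SubKey, RegistryKeyPermissionCheck.ReadWriteSubTree, RegistryOptions.Volatile))
    {
        k.SetValue("SessionKey", "base64-key-material-goes-here");
    }

    // A later run of the program (or another process in the same user session) reads it back:
    using (var k = Registry.CurrentUser.OpenSubKey(SubKey))
    {
        var sessionKey = k?.GetValue("SessionKey") as string;   // null => force a fresh login
    }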
My service needs to store a few bits of information (at least 20 bits or so, but I can easily make use of more) such that
it persists across service restarts, even if the service crashed or was otherwise terminated abnormally
it does not persist across a reboot
it can be read and updated with very little overhead
If I store this information in the registry or in a file, it will not get automatically emptied when the system reboots.
Now, if I were on a modern POSIX system, I would use shm_open, which would create a shared memory segment which persists across process restarts but not system reboots, and I could use shm_unlink to clean it up if the persistent data somehow got corrupted.
I found MSDN : Creating Named Shared Memory and started reimplementing pieces of it within my service; this basically uses CreateFileMapping(INVALID_HANDLE_VALUE, ..., PAGE_READWRITE, ..., "Global\\my_service") instead of shm_open("/my_service", O_RDWR | O_CREAT, 0600).
However, I have a few concerns, especially centered around the lifetime of this pagefile-backed mapping. I haven't found answers to these questions in the MSDN documentation:
Does the mapping persist across reboots?
If not, does the mapping disappear when all open handles to it are closed?
If not, is there a way to remove or clear the mapping? Doesn't need to be while it's in use.
If it does persist across reboots, or does disappear when unreferenced, or cannot be reset manually, this method is useless to me.
Can you verify or find faults in these points, and/or recommend a different approach?
If there were a directory that were guaranteed to be cleaned out upon reboot, I could save data in a temporary file there, but it still wouldn't be ideal: under certain system loads, we are encountering file open/write failures (rare, under 0.01% of the time, but still happening), and this functionality is to be used in the logging path. I would like not to introduce any more file operations here.
The shared memory mapping will not persist across reboots, and it will disappear when all open handles to it are closed. A file-mapping object is a kernel object - kernel objects are always deleted when the last reference to them goes away, either explicitly via CloseHandle or when the process holding the reference exits.
Try creating a registry key with RegCreateKeyEx and REG_OPTION_VOLATILE - the data will not be preserved when the corresponding hive is unloaded. That happens at system shutdown for HKLM, or at user logoff for HKCU.
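A minimal C# sketch of that approach, assuming the service account can create keys under HKLM\SOFTWARE (the key path and value name are hypothetical); RegistryOptions.Volatile is the managed spelling of the same REG_OPTION_VOLATILE flag:

    using Microsoft.Win32;

    const string SubKey = @"SOFTWARE\MyService\VolatileState";   // hypothetical location

    // The key exists only in memory: it survives service restarts and crashes, but is gone
    // after a reboot because volatile keys are never written to the on-disk hive file.
    using (var k = Registry.LocalMachine.CreateSubKey(
        SubKey, RegistryKeyPermissionCheck.ReadWriteSubTree, RegistryOptions.Volatile))
    {
        // Read the previous bits (0 on the first run since boot) and write the updated value.
        int flags = (int)k.GetValue("Flags", 0);
        k.SetValue("Flags", flags | 0x1, RegistryValueKind.DWord);
    }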
Sounds like maybe you want serialization instead of shared memory? If that is indeed appropriate for your application, the way you serialize will depend on your language. If you're using C++, check out Boost.Serialization. C# undoubtedly has lots of serialization options (like Java), if that's what you're using.
I've used ColdFusion sessions for quite a while, so I know how they are used, but now I need to know how they work so that I can plan for scaling my website.
Is a ColdFusion user 'session' simply a quick method to set up two cookies (CFTOKEN and CFID) and an associated server-side memory structure (the SESSION scope)? Does it do anything else? I'm trying to identify the overhead associated with user sessions versus other methods such as cookies.
Your understanding of them is basically correct, although sessions are not bound to the cookies. The cookies record a token, and that token can be passed in the URL string if cookies are not enabled in the browser.
There are 2 main advantages I see of saving things in session instead of cookies:
You control the session scope. People can't edit the data in the session scope without you providing them an interface. Cookies can be modified by the client.
Complex data like structures, arrays, objects, network sessions (FTP, exchange) can be stored there.
Their memory overhead is "low", but that's a relative term. Use the Server Monitor in the ColdFusion Administrator to drill into how much memory your sessions are actually using.
First of all, the session is a scope: a secure and efficient way to keep current-user attributes like permissions or preferences. I'm not sure what you mean by "other methods", but I doubt you'll be able to keep complex data structures (query, object, array) in cookies.
Second, the application server provides really handy event handlers specifically for sessions: onSessionStart() and onSessionEnd().
Third, sessions can be pretty easily shared and clustered, either between CF applications or between CF and J2EE.
Sessions are per-user memory spaces assigned within a particular application space in the JVM's memory. The two cookies are pointers to (the token of) that memory space. Yes, there is overhead to using sessions (RAM, swap space, etc.), but unless you're shoving massive amounts of data into the session scope, it shouldn't be that bad.
One aspect of sessions not mentioned is that they have a lifetime: by default 20 minutes (of inactivity). This lifetime can be set per application, but can never be more than the limit set in the ColdFusion Administrator.
If memory usage is a concern, the time limit could be reduced, although much still depends on Java garbage collection.