How can I find out how many GDI objects my process is allowed to create?

There's a registry key where I can check (and set) the GDI object quota for processes. However, if a user changes that value, the old value remains in effect until a reboot occurs. In my program, I need a way to determine, programmatically, how many more GDI objects I can create. Is there an API for getting GDI information for the current process? What about at the system level?

Always hard to prove the definite absence of an API, but this one is a 95% no-go. Lots of system settings are configured through the registry without an API to tweak them afterward.
Raymond Chen's typical response to questions like these is "if you want to know then you are doing something wrong". It applies here: the default quota of 10,000 handles is enormous.

If you want to find the current quota that matters to you, create GDI objects until that fails. Record that number. Then, destroy all of them.
If you feel like doing this on a regular basis to get an accurate number, you can do so. It's probably going to be fairly expensive though.
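For illustration, a minimal C++ sketch of that probe, assuming a plain Win32 program; GetGuiResources is an existing Win32 call that reports the process's current GDI object count, while ProbeGdiQuota is a name made up here:
#include <windows.h>
#include <vector>
#include <cstdio>

// Create pens until creation fails, then release them all.
// Returns how many additional GDI objects this process could create.
static int ProbeGdiQuota()
{
    std::vector<HPEN> pens;
    for (;;) {
        HPEN pen = CreatePen(PS_SOLID, 1, RGB(0, 0, 0));
        if (pen == NULL)
            break; // quota (or memory) exhausted
        pens.push_back(pen);
    }
    int headroom = static_cast<int>(pens.size());
    for (HPEN pen : pens)
        DeleteObject(pen); // destroy everything we created
    return headroom;
}

int main()
{
    // GetGuiResources reports how many GDI objects the process holds now;
    // current + headroom approximates the effective quota.
    DWORD current = GetGuiResources(GetCurrentProcess(), GR_GDIOBJECTS);
    printf("in use: %lu, headroom: %d\n", current, ProbeGdiQuota());
    return 0;
}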

Since Hans mentioned Raymond already, we should play his "Imagine if this were true" game. If this API - GetGDIObjectLimit or whatever - existed, what would it return? If the object count limit is 10000, you'd expect it to return that, right? So what happens when the system is low on memory? The API would tell you a value that has no actual meaning. If you're getting close to 10000 GDI objects, you are doing something wrong, and you should concentrate on fixing that.

How to deactivate safe mode in the mongo shell?

Short question is in the title: I work with the mongo shell, which is in safe mode by default, and I want to gain better performance by deactivating this behaviour.
Long Question for those willing to know the context:
I am working on a huge set of data like
{
    _id: ObjectId("azertyuiopqsdfghjkl"),
    stringdate: "2008-03-08 06:36:00"
}
and some other fields. There are about 250M documents like that (the whole database with its indexes weighs 36 GB). I want to convert the date into a real ISODate field. I searched a bit for how to write an update query like
db.data.update({},{$set:{date:new Date("$stringdate")}},{multi:true})
but did not find how to make this work, and resolved myself to write a script that takes the documents one after the other and updates each to set a new field whose value is new Date(stringdate). The query uses the _id, so the default index is used.
The problem is that it takes a very long time. I already figured out that if only I had inserted empty date objects when I created the database, I would now get better performance, since there is the problem of data relocation when a new field is added. I also set an index on a relevant field to process the database chunk by chunk. Finally, I ran several concurrent mongo clients on both the server and my workstation to make sure that the limiting factor is database lock availability and not anything else like CPU or network costs.
I monitored the whole thing with mongotop, mongostat and the web monitoring interfaces, which confirmed that the write lock is taken 70% of the time. I am a bit disappointed that mongodb does not have more precise granularity on its write lock: why not allow concurrent write operations on the same collection as long as there is no risk of interference? Now that I think about it, I should have sharded the collection across a dozen shards, even while staying on the same server, because there would have been an individual lock on each shard.
But since I can't do anything right now about the current database structure, I looked for ways to improve performance so that I spend at least 90% of my time writing in mongo (up from 70% currently). I figured out that since I ran my script in the default mongo shell, every time I make an update there is also a getLastError() called afterwards, and I don't want it: there is a 99.99% chance of success, and even in case of failure I can still make an aggregation request after the end of the big process to retrieve the individual exceptions.
I don't think I would gain that much performance by deactivating the getLastError calls, but I think it is worth trying.
I took a look at the documentation and found confirmation of the default behavior, but not the procedure for changing it. Any suggestion?
I work with the mongo shell, which is in safe mode by default, and I want to gain better performance by deactivating this behaviour.
You can use db.getLastError({w:0}) ( http://docs.mongodb.org/manual/reference/method/db.getLastError/ ) to do what you want, but it won't help.
This is because for one:
write a script that takes the documents one after the other and updates each to set a new field whose value is new Date(stringdate).
When using the shell in a non-interactive mode, like within a loop, it doesn't actually call getLastError(). As such, lowering your write concern to 0 will do nothing.
I already figured out that if only I had inserted empty date objects when I created the database, I would now get better performance, since there is the problem of data relocation when a new field is added.
I did tell people, when they asked about this stuff, to add those fields in case of movement, but instead they listened to the guy who said "leave them out! They use space!".
I shouldn't feel smug, but I do. That's an unfortunate side effect of being right when you were told you were wrong.
mongostat and the web monitoring interfaces, which confirmed that the write lock is taken 70% of the time
That's because of all the movement in your documents, which is kinda hard to fix.
I am a bit disappointed that mongodb does not have more precise granularity on its write lock
The write lock doesn't actually denote the concurrency of MongoDB; this is another common misconception that stems from transactional SQL technologies.
For one, write locks in MongoDB are mutexes.
Not only that, but there are numerous rules that dictate when operations will yield to queued operations: how many operations are waiting, whether the data is in RAM or not, and more.
Unfortunately, I believe you have got yourself stuck between a rock and a hard place, and there is no easy way out. This does happen.

What's the best erlang approach to identifying a process's identity from its process id?

When I'm debugging, I'm usually looking at about 5000 processes, each of which could be one of about 100 gen_servers, fsms, etc. If I want to know WHAT an erlang process is, I can do:
process_info(pid(0,1,0), initial_call).
And get a result like:
{initial_call,{proc_lib,init_p,5}}
...which is all but useless.
More recently, I hit upon the idea (brace yourselves) of registering each process with a name that told me WHO that process represented. For example, player_1150 is the player process that represents player 1150. Yes, I end up making a couple million atoms over the course of a week-long run. (And I would love to hear comments on the drawbacks of boosting the limit to 10,000,000 atoms when my system runs with about 8GB of real memory unused, if there are any.) Doing this meant that I could, at the console of a live system, query all processes for how long their message queue was, find the top offenders, then check to see if those processes were registered and print out the atom they were registered with.
I've hit a snag with this: I'm moving processes from one node to another. Now a player process can have 3 different names: player_1158, player_1158_deprecating, player_1158_replacement. And I have to make absolutely sure I register and unregister these names with precision timing, to ensure that a process is always named, that the appropriate names always exist, AND that I don't try to register a name that some dying process already holds. There is some slop room, since this is only used for console debugging of a live system. Nonetheless, the moment I started feeling like this mechanism was affecting how I develop the system (the one that moves processes around), I felt it was time to do something else.
There are two ideas on the table for me right now. The first is an ets table that associates process ids with their descriptions (proc_descriptions below is an arbitrary table name):
ets:insert(proc_descriptions, {self(), {player, 1158}}).
I don't really like that one because I have to keep the table clean manually. When a player exits (or crashes), someone is responsible for making sure that its data are removed from the ets table.
The second alternative was to use the process dictionary, storing similar information. When my exploration of a live system led me to wonder who a process is, I could just look at its process dictionary using process_info.
I realize that neither of these solutions is functionally clean, but given that the system itself is never, EVER the consumer of these data, I'm not too worried about it. I need certain debugging tools to work quickly and easily, so the behavior described is not open for debate. Are there any convincing arguments to go one way or the other (other than the academic "don't use the process dictionary, it's evil" canned garbage)? I'd be happy to hear other suggestions and their justifications.
You should try out gproc, it's a very convenient application for keeping process metadata.
A process can be registered with several names and you can associate arbitrary properties to a process (where the key and value can be any erlang term). Also gproc monitors the registered processes and unregisters them automatically if they crash.
If you're debugging gen_servers and gen_fsms while they're still running, I would implement the handle_info functions for these behaviors. When you send each process a {get_info, ReplyPid} tuple, the process in question can send back a term describing its own state, what it is, etc. That way you don't have to keep track of this information outside of the process itself.
Isac mentions there is already a built-in way to do this.

Is RegNotifyChangeKeyValue as coarse as it seems?

I've been using ReadDirectoryChangesW to monitor a particular portion of the file system. It rather nicely provides a partial pathname to the file or directory which changed along with a clue about the nature of the change. This may have spoiled me.
I also need to monitor a particular portion of the registry, but it looks as if RegNotifyChangeKeyValue is very coarse. It will tell me that something under the given key changed, but it doesn't seem to want to tell me what that something might have been. Bummer!
The portion of the registry in question is arbitrarily deep, so enumerating all the sub-keys and calling RegNotifyChangeKeyValue for each probably isn't a hot idea because I'll eventually end up having to overcome MAXIMUM_WAIT_OBJECTS. Plus I'd have to adjust the set of keys I'd passed to RegNotifyChangeKeyValue, which would be a fair amount of effort to do without enumerating the sub-keys every time, which would defeat a fair amount of the purpose.
Any ideas?
Unfortunately, yes. You probably have to cache all the values of interest to your code and update this cache yourself whenever you get a change trigger, or else set up multiple watchers, one on each of the individual data items of interest. As you noted, the second solution gets unwieldy very quickly.
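A minimal Win32 sketch of that first approach, assuming a dedicated watcher thread; Rescan is a hypothetical placeholder for your own enumerate-and-diff logic. Note that RegNotifyChangeKeyValue is one-shot, so it has to be re-armed after every notification:
#include <windows.h>
#include <cstdio>

void WatchKey(HKEY root, const wchar_t* subKey)
{
    HKEY key;
    if (RegOpenKeyExW(root, subKey, 0, KEY_NOTIFY, &key) != ERROR_SUCCESS)
        return;
    HANDLE event = CreateEventW(NULL, FALSE, FALSE, NULL);
    for (;;) {
        // One notification covers the entire subtree; it does not say
        // which key or value changed.
        LONG rc = RegNotifyChangeKeyValue(
            key,
            TRUE,                                  // watch the whole subtree
            REG_NOTIFY_CHANGE_NAME | REG_NOTIFY_CHANGE_LAST_SET,
            event,
            TRUE);                                 // signal the event asynchronously
        if (rc != ERROR_SUCCESS)
            break;
        WaitForSingleObject(event, INFINITE);
        printf("something under the key changed; rescanning\n");
        // Rescan(key); // hypothetical: enumerate the subtree and diff it
        //              // against the cached snapshot to find what changed
    }
    CloseHandle(event);
    RegCloseKey(key);
}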
If you can implement the required code in .NET, you can get the same effect more elegantly via RegistryEvent and its subclasses.

Is midiOutPrepareHeader a quick call?

Do midiOutPrepareHeader and midiInPrepareHeader just set up some data fields, or do they do something more time-intensive?
I am trying to decide whether to build and destroy the MIDIHDRs as needed, or to maintain a pool of them.
You really have only two ways to tell (without the Windows source):
1) Profile it. Depending on your findings for how long it takes, either have a debug-only scoped timer (see the sketch after this list) that logs when the call suddenly takes longer than what you consider acceptable for your application, or go with your pool solution. Note, though, that the docs say not to modify the buffer once you call the prepare function, and it seems that if you wanted to re-use it you might have to modify it; I'm not familiar enough with the docs to say whether your proposed solution would work.
2) Step through the assembly and see. Don't be afraid. Get the MSFT public symbols and see if it looks like it's just filling out fields or if it's doing something complicated.
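For what it's worth, a sketch of the debug-only scoped timer from option 1, using QueryPerformanceCounter; the 100-microsecond threshold is an arbitrary placeholder:
#include <windows.h>
#include <mmsystem.h> // midiOutPrepareHeader; link against winmm.lib
#include <cstdio>

// Logs when the timed scope exceeds a (hypothetical) acceptable threshold.
class ScopedTimer {
public:
    explicit ScopedTimer(const char* label) : label_(label) {
        QueryPerformanceCounter(&start_);
    }
    ~ScopedTimer() {
        LARGE_INTEGER end, freq;
        QueryPerformanceCounter(&end);
        QueryPerformanceFrequency(&freq);
        double us = (end.QuadPart - start_.QuadPart) * 1e6 / freq.QuadPart;
        if (us > 100.0) // threshold: whatever your application can tolerate
            printf("%s took %.1f us\n", label_, us);
    }
private:
    const char* label_;
    LARGE_INTEGER start_;
};

MMRESULT TimedPrepare(HMIDIOUT out, MIDIHDR* hdr)
{
    ScopedTimer t("midiOutPrepareHeader");
    return midiOutPrepareHeader(out, hdr, sizeof(MIDIHDR));
}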

Overcoming Windows User Object Handle Limit

I'm looking for advanced strategies for dealing with user object handle limits when building heavyweight Windows interfaces. Please explain how you overcame or bypassed this issue using SWT or direct Windows GUI APIs. The only thing I am not interested in is strategies to optimize widget usage, as I have done that extensively; it does not solve the problem, it only makes it less likely.
My Situation:
I have an SWT-based GUI that allows for multiple sessions within the same parent shell, and within each session there are 3 separate places where a list of user-generated comments is displayed. As a user opens multiple sessions and pulls data that populates those lists, the number of user object handles can increase dramatically, depending on the number of comments.
My current solutions:
1. I page the comments by default, thereby limiting the number of comment rows in each session, but due to management demands I also have what is effectively a "View All" button which bypasses this completely.
2. I custom-draw all non-editable information in each row. This means each row uses only 2 object handles.
3. I created JNI calls which query the OS for the current usage and the maximum (see the sketch below). With this I can warn users that a crash is imminent. Needless to say, they ignore the warning.
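For reference, a minimal native sketch of the kind of query behind solution 3; GetGuiResources and RegGetValueW are existing Win32 calls, and the quota value is the registry setting discussed in the answers below (which, as noted elsewhere, only takes effect after a reboot):
#include <windows.h>
#include <cstdio>

int main()
{
    // Current USER handle count for this process.
    DWORD current = GetGuiResources(GetCurrentProcess(), GR_USEROBJECTS);
    // Configured per-process quota, read back from the registry.
    DWORD quota = 0, size = sizeof(quota);
    RegGetValueW(HKEY_LOCAL_MACHINE,
                 L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Windows",
                 L"USERProcessHandleQuota",
                 RRF_RT_REG_DWORD, NULL, &quota, &size);
    printf("USER handles: %lu of %lu\n", current, quota);
    return 0;
}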
First off, are you sure the problem isn't desktop heap rather than handle count? Each handle can consume a certain amount of Windows desktop heap: one USER handle may eat a lot of space, another very little. I'm suggesting this to make sure you're not chasing user handle counts when it's really something else. (Google for Microsoft's dheapmon tool; it may help.)
I've read that you can alter the handle maximums by changing these values in the registry:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows
USERProcessHandleQuota and GDIProcessHandleQuota
This could be a short-term fix for users.
I'd approach this by first figuring out which 2 user handles need to be maintained for each item (2 for each item in a listbox?). That seems suspect: user handles are used only for a few top-level Windows UI objects (windows, menus, cursors, window positions, icons, etc.). I don't see why your widget needs to keep 2 objects around for each item (is it an icon handle?).
If you're looking to rip the whole thing apart, this sounds like a job for a virtual-mode list view (LVS_OWNERDATA).
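To make that concrete, a minimal Win32 sketch of a virtual-mode list view (raw API, not SWT); window plumbing and common-controls initialization are omitted, and the "comment %d" text stands in for your real data store:
#include <windows.h>
#include <commctrl.h> // link against comctl32.lib
#include <cwchar>

// With LVS_OWNERDATA the control allocates no per-item state; it asks
// for each row's text on demand via LVN_GETDISPINFO.
HWND CreateVirtualList(HWND parent, int itemCount)
{
    HWND list = CreateWindowExW(0, WC_LISTVIEWW, L"",
        WS_CHILD | WS_VISIBLE | LVS_REPORT | LVS_OWNERDATA,
        0, 0, 400, 300, parent, NULL, GetModuleHandleW(NULL), NULL);
    ListView_SetItemCount(list, itemCount); // no items are created here
    return list;
}

// In the parent's window procedure, on WM_NOTIFY:
LRESULT OnNotify(LPARAM lParam)
{
    NMHDR* hdr = reinterpret_cast<NMHDR*>(lParam);
    if (hdr->code == LVN_GETDISPINFOW) {
        NMLVDISPINFOW* info = reinterpret_cast<NMLVDISPINFOW*>(lParam);
        if (info->item.mask & LVIF_TEXT) {
            // Supply the text for row iItem from your own data store.
            swprintf(info->item.pszText, info->item.cchTextMax,
                     L"comment %d", info->item.iItem);
        }
    }
    return 0;
}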
You should think about using windowless controls. They are designed for precisely this situation. See "Windowless controls are not magic", by Raymond Chen
Not only top-level windows, but most native controls use one user object each. See Give Me a Handle, and I'll Show You an Object for an in-depth explanation of user- and other handle types. This also means that SWT uses at least one user handle per widget, even for a Composite.
If you truly are hitting the limit of 10000 user objects per process, and you don't have a leak, then your only option is to reduce the number of widget instances in your application. I wrote a blog article about how we did this for our application.
