I found here (http://download.oracle.com/docs/html/A95907_01/diff_uni.htm#1077398) that on Windows Oracle is thread-based, while on Unix it is process-based. Why is it like that?
What's more, there are many Oracle background processes (http://www.adp-gmbh.ch/ora/concepts/processes/index.html) regardless of the operating system.
Why are the log writer and DB writer implemented as processes, while query execution is done using threads (Windows) or processes (Unix)?
Oracle makes use of the SGA, a shared memory area, to store information that is (and has to be) accessible to all sessions/transactions. For example, when a row is locked, that lock is in memory (as an attribute of the row) and all the other transactions need to see that it is locked.
In Windows, a thread cannot access another process's memory: "threads cannot access memory that belongs to another process, which protects a process from being corrupted by another process."
As such, on Windows Oracle must be a single process with multiple threads.
On operating systems that support sharing memory between processes, it is less work for Oracle to use a multi-process architecture and leave process management to the OS.
Oracle runs a number of background threads/processes to do work that is (or can be) asynchronous to the other processes. That way those can continue even when other processes/threads are blocked or busy.
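To make the point about sharing memory between processes concrete, here is a minimal POSIX shared-memory sketch. It only illustrates the OS mechanism, not Oracle's actual SGA code; the segment name and the struct are invented for the example.

    /* Minimal sketch of POSIX shared memory between processes (illustration
     * only, not Oracle's SGA code). The name and struct are made up. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    struct shared_state {
        int locked_row;                    /* a flag every process can see */
    };

    int main(void) {
        /* Create (or open) a named shared-memory object and size it. */
        int fd = shm_open("/demo_sga", O_CREAT | O_RDWR, 0600);
        if (fd == -1) { perror("shm_open"); return 1; }
        if (ftruncate(fd, sizeof(struct shared_state)) == -1) { perror("ftruncate"); return 1; }

        /* Map it; after fork() both processes see the same pages. */
        struct shared_state *st = mmap(NULL, sizeof *st,
                                       PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (st == MAP_FAILED) { perror("mmap"); return 1; }

        if (fork() == 0) {                 /* child: "another session" */
            st->locked_row = 1;            /* set a flag in shared memory */
            return 0;
        }
        wait(NULL);                        /* parent waits, then sees the change */
        printf("parent sees locked_row = %d\n", st->locked_row);

        munmap(st, sizeof *st);
        shm_unlink("/demo_sga");
        return 0;
    }

On Linux this compiles with cc file.c (older glibc needs -lrt for shm_open).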
See this answer I posted earlier in a similar vein to the question 'What is a process and a thread?'. Windows makes extensive use of threads in this fashion, unlike *nix/Linux-based systems, which are process-based. And see here also; this link (which is embedded in the first link I gave) goes directly to my explanation of how Linux time-slices threads and processes.
Hope this helps,
Best regards,
Tom.
Related
A shared resource is used in two application processes, A and B. To avoid a race condition, I decided to disable context switching while executing the portion of code dealing with the shared resource, and to re-enable process switching after exiting the shared portion.
But I don't know how to prevent switching to another process while the shared-resource part is executing, and then re-enable process switching after exiting the shared portion.
Or is there a better method to avoid the race condition?
Regards,
Learner
But I don't know how to prevent switching to another process while the shared-resource part is executing, and then re-enable process switching after exiting the shared portion.
You can't do this directly, but you can do what you want with kernel help: for example, by waiting on a mutex, or by using one of the other ways to do IPC (interprocess communication).
If that's not "good enough", you could even make your own kernel driver that has the semantics you want. The kernel can move processes between "sleeping" and "running". But you should have good reasons why existing methods don't work before thinking about writing your own kernel driver.
Or is there a better method to avoid the race condition?
Avoiding race conditions is all about trade-offs. The kernel has many different IPC methods, each with different characteristics. Get a good book on IPC, and look into how things like Postgres scale to many processors.
For all user-space applications, and the vast majority of kernel code, it holds that you cannot disable context switching. The reason is that context switching is not the responsibility of the application but of the operating system.
In the scenario you mention, you should use a mutex. All processes must follow the convention that before accessing the shared resource they acquire the mutex, and after they are done with the shared resource they release it.
Let's say an application accessing the shared resource has acquired the mutex and is doing some processing of the shared resource, and the operating system performs a context switch, stopping the application from processing the shared resource. The OS can schedule other processes that want to access the shared resource, but they will be in a waiting state, waiting for the mutex to be released, and none of them will do anything with the shared resource. After a certain number of context switches, the OS will again schedule the original application, which will continue processing the shared resource. This will continue until the original application finally releases the mutex. Then some other process will start accessing the shared resource in an orderly fashion, as designed.
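Here is a minimal sketch of that convention, using a process-shared pthread mutex kept in POSIX shared memory. The names are made up, error handling is trimmed, and a real program must arrange for the mutex to be initialised exactly once.

    /* Sketch: a process-shared pthread mutex protecting a shared resource.
     * Names are illustrative; error handling is trimmed. */
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct shared {
        pthread_mutex_t lock;
        int counter;                          /* the "shared resource" */
    };

    static struct shared *attach(void) {
        int fd = shm_open("/demo_shared", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, sizeof(struct shared));
        return mmap(NULL, sizeof(struct shared),
                    PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    }

    int main(void) {
        struct shared *s = attach();

        /* One process must initialise the mutex as PROCESS_SHARED.
         * (Real code must coordinate so this happens exactly once.) */
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(&s->lock, &attr);

        /* The convention every process follows: */
        pthread_mutex_lock(&s->lock);    /* acquire before touching the resource */
        s->counter++;                    /* ... critical section ...             */
        pthread_mutex_unlock(&s->lock);  /* release when done                    */

        printf("counter = %d\n", s->counter);
        return 0;
    }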
If you want a more authoritative and detailed explanation of the whats and whys of similar scenarios, you can watch this MIT lecture, for example.
Hope this helps.
I would suggest looking into named semaphores; see sem_overview(7). This will allow you to ensure mutual exclusion in your critical sections.
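For example, a minimal sketch (the semaphore name is made up, error handling is trimmed):

    /* Sketch: mutual exclusion across processes with a POSIX named semaphore,
     * see sem_overview(7). The semaphore name is made up. */
    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>

    int main(void) {
        /* Every cooperating process opens the same name; an initial value
         * of 1 makes the semaphore behave like a mutex. */
        sem_t *sem = sem_open("/demo_lock", O_CREAT, 0600, 1);
        if (sem == SEM_FAILED) { perror("sem_open"); return 1; }

        sem_wait(sem);                 /* enter the critical section */
        puts("only one process is in here at a time");
        sem_post(sem);                 /* leave the critical section */

        sem_close(sem);
        /* call sem_unlink("/demo_lock") once all processes are finished */
        return 0;
    }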
I would like to reserve one core for my application. In my searches I found dwProcessAffinityMask, which limits my process to run on the cores I want, but this does not prevent threads of other processes from running on "my" core as well.
Is there a way to disallow a specific core/processor from being used by any process/thread system-wide, except my process/thread?
Even if it were possible to set the system affinity mask, that would not help, because it would also prohibit my own process/thread from executing on that processor/core.
If your goal is to ensure that your process gets to run in a timely manner, just set a high priority for your process (for instance HIGH_PRIORITY_CLASS) using SetPriorityClass. Unless the system is running other equally high-priority work (of which there is little on a typical machine), your work will get to run immediately when it's ready to execute.
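For example, a minimal Win32 sketch of that suggestion; the optional affinity call only constrains your own process (as noted in the question, it cannot keep other processes off the core), and the core index is arbitrary:

    /* Sketch: raise this process's priority and, optionally, pin it to one
     * core. Both calls affect only the calling process. */
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* Prefer CPU time over normal-priority processes. */
        if (!SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS))
            printf("SetPriorityClass failed: %lu\n", GetLastError());

        /* Optional: restrict this process to core 2 (bit 2 of the mask). */
        if (!SetProcessAffinityMask(GetCurrentProcess(), (DWORD_PTR)1 << 2))
            printf("SetProcessAffinityMask failed: %lu\n", GetLastError());

        /* ... run the latency-sensitive work here ... */
        return 0;
    }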
I need to have multiple logins and query executions against an Oracle DB: 10 users per process, 10 processes per PC.
I was thinking that I would create 10 threads, one thread per user login.
Is this feasible? Any advice is appreciated.
Very new to threads.
Update:
Thanks for all the comments and answers.
Here are some additional details:
Using Oracle 10.2, Delphi XE, and dbExpress components created on the fly.
Our design is to run 10 processes per machine and simulate 10 user-logins per process. Each login is within its own thread (actually I need to have two logins in each thread, so I am actually creating 200 sessions per machine).
For this simulation exercise, after establishing a connection, each thread retrieves a bunch of data by calling several stored procedures in a loop. For each stored procedure I create a TSQLProcedure object on the fly, then close and free it after using it. Now I am getting ORA-01000 (maximum open cursors exceeded), which I don't understand, since I close and free each stored procedure object.
Changing the settings on the server side is out of the question. I saw some documentation that says that on the application side you can set RELEASE_CURSOR=YES. I am guessing that it's an option set at the procedure level.
Yes, it is feasible. You may need a thread for each session you need (see here for an explanation), and you have to ensure OCI is called in a thread-safe way; how to do that depends on the library you use to call OCI, if you don't call OCI directly.
Yes, it is feasible. Remember that the UI runs on its own thread and can't be accessed directly by the other threads. Also remember that you can't share state between threads unless you protect it. This is a start. Here is an example of using threads with databases and the dbGo library. I suggest you give it a try and come back if you have specific questions.
I have a massive number of shell commands being executed with root/admin privileges through Authorization Services' "AuthorizationExecuteWithPrivileges" call. The issue is that after a while (10-15 seconds, maybe 100 shell commands) the program stops responding with this error in the debugger:
couldn't fork: errno 35
And then while the app is running, I cannot launch any more applications. I researched this issue and apparently it means that there are no more threads available for the system to use. However, I checked using Activity Monitor and my app is only using 4-5 threads.
To fix this problem, I think what I need to do is separate the shell commands into a separate thread (away from the main thread). I have never used threading before, and I'm unsure where to start (I couldn't find any comprehensive examples).
Thanks
As Louis Gerbarg already pointed out, your question has nothing to do with threads. I've edited your title and tags accordingly.
I have a massive number of shell commands being executed with root/admin privileges through Authorization Services' "AuthorizationExecuteWithPrivileges" call.
Don't do that. That function only exists so you can restore the root:admin ownership and the setuid mode bit to the tool that you want to run as root.
The idea is that you should factor out the code that should run as root into a completely separate program from the part that does not need to run as root, so that the part that needs root can have it (through the setuid bit) and the part that doesn't need root can go without it (through not having setuid).
A code example is in the Authorization Services Programming Guide.
The issue is that after a while (10-15 seconds, maybe 100 shell commands) the program stops responding with this error in the debugger:
couldn't fork: errno 35
Yeah. You can only run a couple hundred processes at a time. This is an OS-enforced limit.
It's a soft limit, which means you can raise it—but only up to the hard limit, which you cannot raise. See the output of limit and limit -h (in zsh; I don't know about other shells).
You need to wait for processes to finish before running more processes.
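For example, here is a minimal sketch of "wait before running more": cap the number of concurrent children and reap one before forking the next. The command, the cap, and the retry handling are arbitrary choices for illustration.

    /* Sketch: cap concurrent child processes so fork() doesn't hit the
     * per-user process limit (EAGAIN). Command and cap are arbitrary. */
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define MAX_CHILDREN 8

    int main(void) {
        int running = 0;

        for (int i = 0; i < 100; i++) {
            if (running >= MAX_CHILDREN) {       /* reap one before forking more */
                wait(NULL);
                running--;
            }
            pid_t pid = fork();
            if (pid == 0) {                      /* child: run one command */
                execlp("sleep", "sleep", "1", (char *)NULL);
                _exit(127);
            } else if (pid > 0) {
                running++;
            } else if (errno == EAGAIN && running > 0) {
                wait(NULL);                      /* too many processes: reap one */
                running--;
                i--;                             /* and retry this iteration */
            } else {
                perror("fork");
                exit(1);
            }
        }
        while (wait(NULL) > 0) {}                /* reap the remaining children */
        return 0;
    }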
And then while the app is running, I cannot launch any more applications.
Because you are already running as many processes as you're allowed to. That x-hundred-process limit is per-user, not per-process.
I researched this issue and apparently it means that there are no more threads available for the system to use.
No, it does not.
The errno error codes are used for many things. EAGAIN (35, “resource temporarily unavailable”) may mean no more threads when set by a system call that starts a thread, but it does not mean that when set by another system call or function.
The error message you quoted explicitly says that it was set by fork, which is the system call to start a new process, not a new thread. In that context, EAGAIN means “you are already running as many processes as you can”. See the fork manpage.
However, I checked using Activity Monitor and my app is only using 4-5 threads.
See?
To fix this problem, I think what I need to do is separate the shell commands into a separate thread (away from the main thread).
Starting one process per thread will only help you run out of processes much faster.
I have never used threading before …
It sounds like you still haven't, since the function you're referring to starts a process, not a thread.
This is not about threads (at least not threads in your application). This is about system resources. Each of those forked processes is consuming at least 1 kernel thread (maybe more), some vnodes, and a number of other things. Eventually the system will not allow you to spawn more processes.
The first limits you hit are administrative limits. The system can support more, but it may cause degraded performance and other issues. You can usually raise these through various mechanisms, like sysctls. In general, doing that is a bad idea unless you have a particular (special) workload that you know will benefit from specific tweaks.
Chances are raising those limits will not fix your issues. While adjusting those limits may make you run a little longer, in order to actually fix it you need to figure out why the resources are not being returned to the system. Based on what you described above I would guess that your forked processes are never exiting.
In perfmon in Windows Server 2003, there are counter objects to get per-process processor time and memory working set statistics. The only problem is that in an environment with multiple application pools, there is no way to reliably identify the correct worker process. In perfmon, they are all called "w3wp", and if there is more than one, they are w3wp, w3wp#1, w3wp#2, and so on. Even these names are unreliable - the number depends on which one started first, and obviously changes when an app pool is recycled because the process is destroyed and restarted.
I haven't found any ASP.NET-specific counters, and for some reason, my IIS object doesn't separate instances - there's only one "global" instance.
Ultimately, I just want the "% Processor Time" and "Working Set" counters for a specific IIS App Pool. Any suggestions?
We'd always collect the stats for all the w3wp processes, and we would capture the PID, which is one of the counters in the Process group.
There's a script that sits in Server 2003's system32 folder called IISApp.vbs that will list all the worker processes and their PIDs. You will need to run this to capture the PIDs.
I'm sure there has to be a better way, but this worked when we needed to do ad hoc monitoring.
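For comparison, here is a minimal C sketch (PDH API, ANSI build, link with pdh.lib) of the same idea the script automates: read the "ID Process" counter for each w3wp instance to see which instance name currently belongs to which PID. The fixed instance list and the missing error handling are just for illustration; real code would enumerate the instances.

    /* Sketch: map w3wp perfmon instances to PIDs via the "ID Process" counter.
     * ANSI build assumed; link with pdh.lib. Error handling is trimmed. */
    #include <windows.h>
    #include <pdh.h>
    #include <stdio.h>

    int main(void) {
        const char *instances[] = { "w3wp", "w3wp#1", "w3wp#2" };
        PDH_HQUERY query;
        PDH_HCOUNTER counters[3];
        char path[128];

        PdhOpenQuery(NULL, 0, &query);
        for (int i = 0; i < 3; i++) {
            sprintf(path, "\\Process(%s)\\ID Process", instances[i]);
            PdhAddCounter(query, path, 0, &counters[i]);
        }
        PdhCollectQueryData(query);

        for (int i = 0; i < 3; i++) {
            PDH_FMT_COUNTERVALUE val;
            if (PdhGetFormattedCounterValue(counters[i], PDH_FMT_LONG,
                                            NULL, &val) == ERROR_SUCCESS)
                printf("%s -> PID %ld\n", instances[i], val.longValue);
        }
        PdhCloseQuery(query);
        return 0;
    }

Once you know which instance maps to your app pool's PID, you can read \Process(w3wp#N)\% Processor Time and \Process(w3wp#N)\Working Set for that instance, re-checking the mapping after a recycle.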
The w3wp instance may not appear if the worker process has been idle for a long time.
The application's UI has to be used for a short period of time so that the worker process (w3wp) shows up in the instances.