What process reads blocks into the buffer cache? - oracle

DBWR processes write dirty blocks from the buffer cache to the data files.
The documentation says that blocks are read into the buffer cache before they are formed into a result set. But "who" does that reading? What is that process called?

From the overview of server processes:
Oracle Database creates server processes to handle the requests of client processes connected to the instance. A client process always communicates with a database through a separate server process.
Server processes created on behalf of a database application can perform one or more of the following tasks:
Parse and run SQL statements issued through the application, including creating and executing the query plan (see "Stages of SQL Processing")
Execute PL/SQL code
Read data blocks from data files into the database buffer cache (the DBWn background process has the task of writing modified blocks back to disk)
Return results in such a way that the application can process the information
So each dedicated or shared server process populates the buffer cache as it reads data from the disk.
Writing out the modified blocks is done through a common DBWR background process so it can be asynchronous, and can also combine changes from multiple sessions. You generally don't want your application to wait for (slow) physical disk writes when it doesn't have to; it does have to wait for data to be read, though, so there wouldn't be much benefit in making reads a separate background process.
You don't explicitly call that process, though; it's just handled behind the scenes.

Every client session's server process reads the datafiles.
That is why the formula for the OS kernel limit on the number of open files contains the term:
#processes * #datafiles
You can also easily check it by using lsof on Linux.
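To make the formula concrete (illustrative numbers only): an instance with 200 dedicated server processes and 50 datafiles may need on the order of 200 * 50 = 10,000 open file descriptors for the datafiles alone, which is worth remembering when sizing limits such as fs.file-max or the per-process nofile limit.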

Related

Writing messages to a shared log file from different MPI processes independently

I have a parallel MPI program in which I would like to collect log messages for debugging in a single log text file. The length, number, and timing of messages varies for each process, and I cannot use the collective write_shared function. Is there a way to check if a process is currently writing to the shared file and hold the others until that action has been completed, to prevent messages from being written over one another? I'm imagining functionality like a mutex or a lock.
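A minimal sketch of one approach, using MPI-IO's shared file pointer: MPI_File_write_shared is non-collective (it is MPI_File_write_ordered that is collective), and the library serializes writes through the shared pointer, which gives exactly the mutex-like behaviour asked about. The file name and message text below are illustrative assumptions:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Open (collectively) one shared log file; the name is illustrative. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "debug.log",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY | MPI_MODE_APPEND,
                  MPI_INFO_NULL, &fh);

    /* Each rank appends whenever it has something to say; writes through
       the shared file pointer are serialized by the MPI library, so one
       call's output is never interleaved with another's. */
    char msg[128];
    int len = snprintf(msg, sizeof msg, "rank %d: my log message\n", rank);
    MPI_File_write_shared(fh, msg, len, MPI_CHAR, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

Compiled and launched with the usual mpicc/mpirun, each rank's line appears exactly once; only the order of lines across ranks is nondeterministic.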

NiFi process groups performance (output ports)

I use NiFi process groups to simplify the view of the entire process.
However, to use process groups, we have to pass the output to an output port, and then the next processor has to be fed from that process group via the output port.
I have noticed that I experience performance degradation when I do that. It seems that the downstream processors are waiting for the output port to send files although the files are "available" in the upstream process group's output port.
I removed the process groups and directly connected the processors, and I see a drastic improvement in the flows, although this looks messy and unreadable (readability being the whole point of using process groups).
There is no configuration available on the output port, and it seems like just a passthrough mechanism (it should be), but I am not sure why it is acting as a bottleneck.
Any views or insights on this would be very helpful.
1) Option that is slower: Input -----> Process Group (containing Input port + Extract text + Replace text + Output port) ------> Output
2) Faster performing flow: Input -------> Extract text + Replace text ------------> Output
There is a thread about this on HCC.
Some things to look into:
If there is too much in the queues, swapping may occur
Timer-based microbatching is used to move data between process groups. This in itself should not add significant overhead, but you will want to make sure that you set the Maximum Timer Driven Thread Count high enough

Need to write a single logfile from different SPs running in different sessions concurrently in PL/SQL

I want to write to a single log file (which gets created on a daily basis) from multiple SPs running in different sessions.
This is what I have done.
create or replace package PKG_LOG:
procedure SP_LOGFILE_OPEN
step 1) Open the logfile:
LF_LOG := UTL_FILE.FOPEN(LV_FILE_LOC, O_LOGFILE, 'A', 32760);
end SP_LOGFILE_OPEN;
procedure SP_LOGFILE_WRITE
step 1) Write the logs as the application needs:
UTL_FILE.PUT_LINE(LF_LOG, 'whatever I want to write');
step 2) Flush the content, as I want the logs to be written in real time:
UTL_FILE.FFLUSH(LF_LOG);
end SP_LOGFILE_WRITE;
Now whenever I want to write the log in any stored procedure, I first call SP_LOGFILE_OPEN and then SP_LOGFILE_WRITE (as many times as I want).
The problem is, if there are two stored procedures, say SP1 and SP2, and both of them try to open the file concurrently, it never throws an error or waits for the other to finish. Instead, the file gets opened in both of the sessions where SP1 and SP2 are executing.
The content of SP1 (if it started running first) will be completely written into the logfile, but the content from SP2 will only be partially written. SP2 starts writing only when SP1's execution stops. Also, the initial content SP2 was trying to write into the logfile gets lost due to FFLUSH.
As per my requirement, I don't want to lose the content of SP2 while SP1 is running.
Any suggestions, please. I don't want to drop the idea of FFLUSH, as I need the logs in real time.
Thanks.
You could use DBMS_LOCK to get a custom lock or wait until a lock is available, then do your write, then release the lock. It has to be serialized.
But this will make your concurrency problem even worse. You're basically saying that all calls to this procedure must get in line and be processed one by one. And remember that disk I/O is slow, so your database is now only as fast as your disk.
Yours is a bad idea. Instead of writing directly to a file, simply enqueue a log message to an Oracle Advanced Queue and create a job running very frequently (every few seconds) to dequeue from the AQ. It is the procedure invoked by the job that actually writes to the file. This way you can synchronize different SP executions trying to log concurrently to the same file: the actual logging is done by one single SP invoked by the job.

Continuously running code in Win32 app

I have a working GUI and now need to add some code that will run continuously and update the GUI with data. Where should this code go? I know that it should not go into the message loop because it might block incoming messages to the window, but I'm confused about where in my window procedure this code could run.
You have a choice: you can use a thread and post messages back to the main thread to update the GUI (or update the GUI directly, but don't try that if you used MFC), or you can use a timer that will post you messages periodically; you then simply implement a handler for the timer and do whatever you need to there.
The thread is best for a complicated, slow process that might block. If the process of getting data is quick (and/or can be set to timeout on error) then a timer is simpler.
Have you looked into threading at all?
Typically, you would create one thread that performs the background task (in this case, reading the voltage data) and stores it in a shared buffer. The GUI thread simply reads that buffer every so often (on redraw, every 30 seconds, when the user clicks refresh, etc.) and displays the data.
Your background thread runs on its own schedule, getting CPU time from the OS, and is not bound to the UI or message pump. It can use some type of timer to monitor the data source and read things in as necessary.
Now, since the threads run separately and may run at the same time, you need to make them aware of one another. This can be done with locks (look into mutexes). For example:
The monitor thread reads the current voltage and stores it in its internal buffer.
The monitor thread locks the shared buffer holding the latest sample.
The monitor copies its internal buffer to the shared one.
The monitor unlocks the shared buffer.
Simultaneously, but separately, the UI thread:
Gets a redraw call.
Waits for the buffer to be unlocked, then reads the value.
Draws the UI with the buffer value.
Setting up a new thread and using it is pretty simple in most Windows GUI-producing languages. C/C++ and C# both have very simple APIs for creating a new thread and having it work on some task; you usually just need to provide a function for the thread to run. See the MSDN docs on CreateThread for a C example; a minimal sketch also follows below.
The concept of threading and locking is for the most part language-agnostic, and similar in most C-inspired languages. You'll need to have your main (in this case, probably UI) thread control the lifetime of the worker: start the worker after the UI is created, and stop it before the UI is shut down.
This approach has a bit of up-front overhead, which may feel like overkill if your data fetch is very simple. But if your data source changes (a network request, some blocking data source, reading over actual wires from a physical sensor, etc.), you only need to change the monitor thread, and the UI doesn't need to know.
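A minimal C sketch of the monitor-thread pattern described above; the voltage source, polling interval, and private message number are illustrative assumptions, not part of the original question:

#include <windows.h>

static CRITICAL_SECTION g_lock;      /* protects g_sharedSample      */
static double g_sharedSample;        /* latest value for the UI      */
static volatile LONG g_running = 1;  /* worker shutdown flag         */

static double ReadVoltage(void) { return 3.3; }  /* placeholder data source */

/* Background/monitor thread: read, lock, copy, unlock, repeat. */
static DWORD WINAPI MonitorThread(LPVOID param)
{
    HWND hwnd = (HWND)param;
    while (g_running) {
        double sample = ReadVoltage();  /* read into a local variable */
        EnterCriticalSection(&g_lock);  /* lock the shared buffer     */
        g_sharedSample = sample;        /* copy local -> shared       */
        LeaveCriticalSection(&g_lock);  /* unlock                     */
        PostMessage(hwnd, WM_APP + 1, 0, 0);  /* nudge the UI thread  */
        Sleep(100);                     /* poll interval              */
    }
    return 0;
}

/* In the window procedure, on WM_APP + 1, the UI thread does the mirror
   image: EnterCriticalSection, copy g_sharedSample into a local,
   LeaveCriticalSection, then redraw using the local copy.

   Lifetime control from the UI thread, around the message loop:
     InitializeCriticalSection(&g_lock);
     HANDLE h = CreateThread(NULL, 0, MonitorThread, hwnd, 0, NULL);
     ... run message loop ...
     InterlockedExchange(&g_running, 0);
     WaitForSingleObject(h, INFINITE);
     CloseHandle(h);
     DeleteCriticalSection(&g_lock);
*/

Using PostMessage here keeps all actual drawing on the UI thread, which is what the answer above recommends; the worker never touches the GUI directly.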

Does Redis persistence block read and write requests?

I am using Redis and saving data to disk at a certain time interval. I see that normal Redis read and write times are on the order of 0.2 milliseconds, but I see a few peaks on the order of 30 milliseconds. I read that Redis forks a background process to write data to disk; does that forking happen on the same thread (Redis uses a single thread to serve all requests) that serves the read and write requests?
If this is true, I want a solution such that persistence does not increase latency for read and write requests.
If you issue a BGSAVE, the background save will fork. The OS needs to have a spare CPU core available, of course, for this not to impact the redis-server main thread. Note that the fork() call itself runs on the main thread and can take a few milliseconds for a large dataset (the child's page tables have to be set up), which is a common cause of occasional latency spikes like the ones you are seeing. If you configure save in redis.conf, a BGSAVE is basically what happens. I would configure it to off (save "") and issue BGSAVE manually while troubleshooting.
If you issue a SAVE, saving will be synchronous, and other clients will have to wait.
See also here. You might want to skip RDB snapshotting altogether and rely on AOF.
Also see my remark on sensitive data: SO comment. There are many ways to make sure your data is safe. Disk-persistence is only one of them.
Hope this helps, TW
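As a small illustration of the troubleshooting step above, a sketch that triggers the fork-based snapshot by hand, using the hiredis C client (the host and port are assumptions):

#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void)
{
    /* Connect to the (assumed) local Redis instance. */
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) {
        fprintf(stderr, "connect failed: %s\n", c ? c->errstr : "oom");
        return 1;
    }

    /* BGSAVE forks a child to write the RDB file; the parent keeps
       serving requests, unlike the blocking SAVE command. */
    redisReply *reply = redisCommand(c, "BGSAVE");
    if (reply != NULL) {
        printf("BGSAVE: %s\n", reply->str);  /* "Background saving started" */
        freeReplyObject(reply);
    }

    redisFree(c);
    return 0;
}

If your latency peaks line up with these manual BGSAVEs, the fork is the culprit, and AOF (e.g. with appendfsync everysec) may be the better trade-off.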
