With VirtualAlloc() you can specify MEM_WRITE_WATCH. Is it possible to establish a write watch after allocation, on pages that were previously committed without MEM_WRITE_WATCH?
Assume that I've connected 3 times to the database with the same user from different PCs. Does Oracle create separate PGA areas for each of them, or just one? If one, how does it handle multiple queries coming from different sessions connected as the same user and executed at the same time?
Each session (assuming you're using dedicated sessions) allocates separate memory in PGA for things like sorts. It doesn't matter whether those sessions come from 1 user or 100 users. Each session gets its own memory.
Answering your questions
Does Oracle create separate PGA areas for each of them, or just one?
The Program Global Area, or PGA, is an area of memory allocated for and private to one server process. How the PGA is used depends on the connection configuration of the Oracle database: either shared server or dedicated server.
In a shared server configuration, multiple users share a connection to the database, minimizing memory usage on the server, but potentially affecting response time for user requests. In a shared server environment, the SGA holds the session information for a user instead of the PGA. Shared server environments are ideal for a large number of simultaneous connections to the database with infrequent or short-lived requests. In a dedicated server environment, each user process gets its own connection to the database; the PGA contains the session memory for this configuration. The PGA also includes a sort area. The sort area is used whenever a user request requires a sort, bitmap merge, or hash join operation.
Therefore, the answer is yes, assuming you are not using a shared server configuration.
If one, how does it handle multiple queries coming from different sessions connected as the same user and executed at the same time?
In a SHARED SERVER configuration, the SGA holds the session information for a user instead of the PGA. That is precisely what makes it possible to handle multiple connections with the same pool of server processes: the working areas have to be kept in the SGA, because any shared server process may end up handling a request from any user process.
Reading this article: http://go-database-sql.org/accessing.html
It says that the sql.DB object is designed to be long-lived and that we should not Open() and Close() databases frequently. But what should I do if I have 10 different MySQL servers and have sharded them so that there are 511 databases on each server, for example the way Pinterest shards its data with MySQL?
https://medium.com/@Pinterest_Engineering/sharding-pinterest-how-we-scaled-our-mysql-fleet-3f341e96ca6f
Wouldn't I then need to constantly access new nodes and new databases? As I understand it, I would have to Open and Close the database connection all the time depending on which node and database I have to access.
It also says that:
If you don’t treat the sql.DB as a long-lived object, you could experience problems such as poor reuse and sharing of connections, running out of available network resources, or sporadic failures due to a lot of TCP connections remaining in TIME_WAIT status. Such problems are signs that you’re not using database/sql as it was designed.
Will this be a problem? How should I solve this issue then?
I am also interested in this question. I guess a solution could look like this:
Minimize the number of idle connections in the pool with db.SetMaxIdleConns(N).
Keep a map[serverID]*sql.DB. When there is no *sql.DB for a server yet, add it to the map (see the sketch after this list).
Make data more local, so backends usually go to “their” databases. However, Pinterest does not seem to do this.
Increase the socket and open-file limits on backend machines so they can keep more connections open.
Set a reasonable idle timeout so that long-unused connections get closed.
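A minimal sketch of the map idea, assuming 10 MySQL servers: keep one long-lived *sql.DB per server, opened lazily on first use. The driver, DSN format, hostnames, and pool limits below are assumptions, not anything the article prescribes. Because the DSN leaves the database name out, queries can address a specific shard database explicitly (e.g. a fully qualified table name), so one pool per server can serve all 511 databases on it.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"sync"
	"time"

	_ "github.com/go-sql-driver/mysql" // driver choice is an assumption
)

var (
	mu    sync.Mutex
	pools = map[string]*sql.DB{} // one long-lived *sql.DB per server
)

// dsnFor is a placeholder: build the DSN for a given server however you map shard IDs to hosts.
// Leaving the database name out of the DSN lets queries name a shard database explicitly.
func dsnFor(serverID string) string {
	return fmt.Sprintf("user:pass@tcp(%s:3306)/", serverID)
}

// poolFor returns the *sql.DB for a server, opening it lazily on first use and
// keeping it for the life of the process. sql.Open does not dial yet; physical
// connections are created on demand, so holding 10 of these pools is cheap.
func poolFor(serverID string) (*sql.DB, error) {
	mu.Lock()
	defer mu.Unlock()
	if db, ok := pools[serverID]; ok {
		return db, nil
	}
	db, err := sql.Open("mysql", dsnFor(serverID))
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(20)                  // cap connections per server
	db.SetMaxIdleConns(5)                   // keep the idle pool small
	db.SetConnMaxLifetime(30 * time.Minute) // recycle old connections periodically
	pools[serverID] = db
	return db, nil
}

func main() {
	db, err := poolFor("db-host-1") // hypothetical server ID / hostname
	if err != nil {
		log.Fatal(err)
	}
	_ = db // db.Ping() would verify connectivity once a real server is reachable
}
```

This way the total number of sql.DB objects stays at the number of servers, which is exactly the "long-lived" usage the article recommends, and no Open/Close happens per request.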
I am using Infinispan 6.0.2 via the Wildfly 8.2 subsystem. I have configured a transactional cache that uses a String-Based JDBC Cache Store to persist content placed in the Infinispan cache.
My concern, after reading the following in the Infinispan documentation, is that the cache and cache store can get out of sync when putting/updating/removing multiple entries in the cache in the same transaction, because the transaction may commit/roll back in the cache but only partially succeed/fail in the cache store.
4.5. Cache Loaders and transactional caches
When a cache is transactional and a cache loader is present, the cache loader won’t be enlisted in the transaction in which the cache is part. That means that it is possible to have inconsistencies at cache loader level: the transaction to succeed applying the in-memory state but (partially) fail applying the changes to the store. Manual recovery would not work with caches stores.
Could someone please clarify whether the above statement only refers to loading from a cache store, or whether it also refers to writing to a store.
If it also applies when writing to a cache store, are there any recommended strategies/solutions for ensuring a cache and cache store remain in sync?
The driving factor behind this for me is that I am using Infinispan both for write-through and for overflow of business-critical data, and I need confidence that the cache store correctly represents the state of the data.
I have also asked this question on the Infinispan Forums
Many thanks in advance.
It applies to writes as well; a failure to write to the store does not affect the rest of the transaction.
The reason for this is that the actual persistence API is not transactional (edit: newer versions of Infinispan support transactional persistence, too). With a two-phase commit (in the first phase, prepare, all locks are acquired; in the second, commit, the write is executed), the write to the store happens in the second phase, so a failure there cannot roll back the changes on the other nodes.
Although Infinispan is trying to get close to a strongly consistent in-memory database, given these guarantees it is still rather a cache. If you are interested in the design limitations (some of them also theoretical limitations), I recommend reading the Infinispan wiki.
I know that when using a PreparedStatement the statement will be cached for the next use, but I've read in the docs that the cache is maintained per connection. From my understanding this means that each connection maintains its own cache, i.e. connection A can't use a statement cached by connection B, even if those two connections are in the same connection pool.
I'm wondering why the connection pool can't manage the cache for all the connections in it, so that a statement could be reused by all connections.
My question: am I right about this, or have I just misunderstood it? And if I'm right, what about the idea above: could it be implemented that way?
A statement handle is - usually - linked to the physical connection that created it (not only on the JDBC side, but also on the database side). It is also deleted/closed/disposed when the connection is closed. As it is linked to the connection, the handle can't be used from a different connection; therefore the statement cache - if any - is per connection.
Even if this were technically possible, there could be additional problems (e.g. privilege leaks if the connections have different rights, etc.).
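For what it's worth, statement reuse managed above the connection level has been implemented elsewhere: Go's database/sql, for example, attaches a prepared statement to the pool and transparently re-prepares it on whichever underlying connection ends up executing it. A minimal sketch of that contrast (this is Go, not JDBC; the MySQL driver, DSN, and table are placeholder assumptions):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // driver choice is an assumption
)

func main() {
	db, err := sql.Open("mysql", "user:pass@tcp(localhost:3306)/app") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Prepared against the pool (*sql.DB), not a single connection. Under the
	// hood the statement is still prepared separately on each physical
	// connection that runs it - consistent with the answer above - but the
	// pool tracks and re-prepares it transparently, so callers share one handle.
	stmt, err := db.Prepare("SELECT name FROM users WHERE id = ?")
	if err != nil {
		log.Fatal(err)
	}
	defer stmt.Close()

	var name string
	if err := stmt.QueryRow(42).Scan(&name); err != nil {
		log.Fatal(err)
	}
	log.Println(name)
}
```

So a pool can hide the per-connection nature of statement handles from callers, but it cannot make a single database-side handle shared across physical connections.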
I am building a questionnaire, and I want to save the number of questions the user has completed and the number of questions available. I know how to calculate these values, but I don't want to go to the database every time the user asks for a page that shows this information (which is quite often).
I considered saving it in the Session, but the problem is that the Session expires before the authentication does, so the information might get lost while the user is still logged in.
Any suggestions?
EDIT: I forgot to mention that I am working on a server where I cannot specify the session timeout myself. Also, the number of answered questions has to be updated when a user answers a question.
Sessions are generally extended each time a request is made to the server, so while someone is using the questionnaire, periodically send a request to the server using JavaScript just to ensure the session doesn't end.
You could consider increasing the session timeout.
A very interesting resource about caching and ASP.NET is http://msdn.microsoft.com/en-us/magazine/gg650661.aspx
As you can't specify the session timeout, use the Cache class; it supports several different mechanisms for deciding when to discard a cached item...
EDIT - as per comment:
With a cache you should implement a "write-through" scheme, i.e. always update both the cache and the DB, which means the cache is always right and reads never hit the DB after the initial load at application startup.
Another option is to update the DB and invalidate the cache item... on the next read access you get a "cache miss" and handle it by hitting the DB and saving the result into the cache... this way reads hit the DB in the worst case as often as writes do... this pattern only helps if you have far more reads than writes... (both patterns are sketched below).
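A minimal, language-agnostic sketch of the two schemes above, written in Go here only for illustration; the questionStore type and all names are hypothetical stand-ins, not the ASP.NET Cache API:

```go
package main

import (
	"fmt"
	"sync"
)

// questionStore stands in for the real database access layer (hypothetical).
type questionStore struct{ completed map[string]int }

func (s *questionStore) Save(user string, n int) { s.completed[user] = n }
func (s *questionStore) Load(user string) int    { return s.completed[user] }

type progressCache struct {
	mu    sync.Mutex
	items map[string]int
	db    *questionStore
}

// WriteThrough updates the DB and the cache together, so reads never hit the DB
// after the initial load.
func (c *progressCache) WriteThrough(user string, n int) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.db.Save(user, n)
	c.items[user] = n
}

// Invalidate updates the DB and drops the cached item; the next read repopulates it.
func (c *progressCache) Invalidate(user string, n int) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.db.Save(user, n)
	delete(c.items, user)
}

// Get returns the cached value, falling back to the DB on a cache miss.
func (c *progressCache) Get(user string) int {
	c.mu.Lock()
	defer c.mu.Unlock()
	if n, ok := c.items[user]; ok {
		return n
	}
	n := c.db.Load(user)
	c.items[user] = n
	return n
}

func main() {
	db := &questionStore{completed: map[string]int{}}
	cache := &progressCache{items: map[string]int{}, db: db}

	cache.WriteThrough("alice", 3)
	fmt.Println(cache.Get("alice")) // served from the cache: 3

	cache.Invalidate("alice", 4)
	fmt.Println(cache.Get("alice")) // cache miss, reloaded from the DB: 4
}
```

Either way the cache is kept consistent with the DB on every write, which is what matters for the "answered questions" counter; the write-through variant simply trades a slightly larger cache footprint for never paying the read-miss cost.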