What is the difference between DBCC cachedataremove and sp_unbindcache? - caching

I would like to know the difference between DBCC cachedataremove and sp_unbindcache, because in my project I have permission to run sp_unbindcache but not to execute DBCC cachedataremove. I was assuming that they did pretty much the same thing.
SYBASE ASE 15.7

sp_unbindcache is used to unbind an object from a user-defined cache, whereas dbcc cachedataremove is used to remove an object from the default data cache.
You cannot bind objects to the default data cache explicitly, only to user-defined caches.
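As a rough illustration using the Sybase sample database (pubs2/titles and my_cache are placeholders; the exact dbcc cachedataremove argument list is not shown here, so check the ASE 15.7 dbcc documentation for it):

-- Bind a table to an existing user-defined cache, then unbind it again.
-- sp_unbindcache takes the database name and, optionally, the object name.
exec sp_bindcache 'my_cache', 'pubs2', 'titles'
go
exec sp_unbindcache 'pubs2', 'titles'
go
-- dbcc cachedataremove, by contrast, flushes an object's pages out of the
-- default data cache; its arguments vary by version, so consult the dbcc docs.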

Related

Cache Coherency with Tarantool

I understand that Tarantool has ACID transactions within a stored procedure. My question is: does it also make sure in-memory data stays in sync with the persistent file system data? For example, if I change 5 records using a stored procedure and something goes wrong while writing the changes to the WAL file, will the in-memory cache roll back to the original values for all 5 records?
Also, while an update transaction is in progress, will other readers see 'dirty' uncommitted records or a consistent view of the records as these existed before transaction started?
Thanks
Tarantool has a special function for transaction control[1] inside stored procedures, but it has some limitations: for instance, you can't call fiber.yield()[2] (including underlying calls, i.e. fio, sockets and so on) inside a box.begin() ... box.commit() section. You can find more about transaction control here: https://tarantool.io/en/doc/1.9/book/box/atomic.html?highlight=yield.
Also, Tarantool supports fsync for the WAL[3].
[1] https://tarantool.io/en/doc/1.9/book/box/box_txn_management.html?highlight=commit#lua-function.box.commit
[2] https://tarantool.io/en/doc/1.9/reference/reference_lua/fiber.html?highlight=yield#lua-function.fiber.yield
[3] https://tarantool.io/en/doc/1.9/reference/configuration/index.html?highlight=fsync#confval-wal_mode
Dirty reads are possible only if the user yields via fibers inside the transaction and does not control their own code. In other words, they are possible only if a logical error exists in the user's code.
You are welcome.

Does the compiled prepared statement in the database driver still require compilation in the database?

In the Oracle JDBC driver, there is an option to cache prepared statements. My understanding of this is that the prepared statements are precompiled by the driver, then cached, which improves performance for cached prepared statements.
My question is, does this mean that the database never has to compile those prepared statements? Does the JDBC driver send some precompiled representation, or is there still some kind of parsing/compilation that happens in the database itself?
When you use the implicit statement cache (or the Oracle extension for the explicit statement cache), the Oracle driver caches a prepared or callable statement after(!) the close() for re-use with the physical connection.
So what happens is: if a prepared statement is used and the physical connection has never seen it, it sends the SQL to the DB. Depending on whether the DB has seen the statement before or not, it will do a hard parse or a soft parse. So typically, if you have a 10-connection pool, you will see 10 parses, one of them being a hard parse.
After the statement is closed on a connection, the Oracle driver puts the handle to the parsed statement (shared cursor) into an LRU cache. The next time you call prepareStatement on that connection, it finds this cached handle and does not need to send the SQL at all. This results in an execution with NO PARSE.
If you use more (different) prepared statements on a physical connection than the cache can hold, the longest-unused open shared cursor is closed. This results in another soft parse the next time that statement is used, because the SQL needs to be sent to the server again.
This is basically the same function as some data sources for middleware have implemented more generically (for example prepared-statement-cache in JBoss). Use only one of both to avoid double caching.
You can find the details here:
http://docs.oracle.com/cd/E11882_01/java.112/e16548/stmtcach.htm#g1079466
Also check out the Oracle Unified Connection Pool (UCP) which supports this and interacts with FAN.
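If you want to verify the parse behaviour described above for yourself, one option (my own suggestion, not something the driver documentation prescribes) is to watch the parse and execution counters on the database side:

-- The comment literal below is a placeholder; match it to your own statement text.
SELECT sql_text, parse_calls, executions
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /* stmt_cache_test */%';
-- With the implicit statement cache enabled, executions should keep climbing
-- while parse_calls stays roughly at the number of physical connections in the pool.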
I think that this answers your question (sorry, it is PowerPoint, but it explains how the prepared statement is sent to Oracle, how Oracle stores it in the shared SQL pool, processes it, etc.). The main performance gain you get from prepared statements is that on the 1+nth run you avoid hard parses of the SQL statement.
http://chrisgatesconsulting.com/preparedStatements.ppt
Oracle (or the DB of your choice) will store the prepared statement; Java just sends it the same statement text, and the DB matches it to the stored one. The shared SQL pool is a limited resource, however: after some time with no executions the statement will be purged (especially uncommon queries), and then a re-parse will be required, whether or not it is cached in your Java application.

Implementing user-defined db parameters/properties in Oracle

OK, the question title probably isn't the best, but I'm looking for a good way to implement an extensible set of parameters for Oracle database applications that "stay with" the host/instance. By "stay with", I mean that I'd like to rule out just having an Oracle table of name/value pairs that would have to be modified if I create a test/QA instance by cloning the production instance. (For example, imagine a parameter called email_error_address that should be set to prod_support@abc.com in production and qa_support@abc.com in testing.)
These parameters need to be accessed from both PL/SQL code running in the database as well as client-side code. I started out doing this by overloading the plsql_cc_flags init parameter (not a solution I'm proud of), but this is getting messy to maintain and parse.
[Edit]
Ideally, the implementation would allow changes to the list without restarting the instance, similar to the dynamically-modifiable init parameters.
You want to have a separate set of values for each environment. You want these values to be independent of the data, so that they don't get overridden if you import data from another instance.
The solution is to use an external table (providing you are on 9i or higher). Because external tables hold the data in an OS file they are independent of the database. To apply changed values all you need to do is overwrite the OS file.
All you need to do is ensure that the files for each environment are kept separate. This is easy enough if Test, QA, Production, etc. are on their own servers. If they are on the same server, you will need to distinguish them by file name or directory path; in either case you may need to issue a bit of DDL to correct the location in the event of a database refresh.
The drawback to using external tables is that they can be a bit of a performance overhead - they are really intended for bulk loading. If this is likely to be a problem, you could use caching with a user-defined namespace or CONTEXT. Load the values into memory using DBMS_SESSION.SET_CONTEXT(), either on demand or with an ON LOGON trigger. Retrieve the values with wrapper calls to SYS_CONTEXT(). Because the namespace is in session memory, retrieval is quite fast. René Nyffenegger has a simple example of working with CONTEXT: check it out.
While I've been writing this up, I see you have added a requirement to change things on the fly. As I have said already, this is easy with an OS file, but the use of caching makes things slightly more difficult. The solution would be to use a globally accessible CONTEXT. Have a routine which loads all the values at startup, which you can also call whenever you refresh the OS file.
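To make that concrete, here is a rough sketch with assumed names (directory app_config_dir, file params.txt, context app_params_ctx, package app_params_pkg); adjust everything to your own environment:

-- Directory object pointing at the OS location of the config file (assumed path).
CREATE DIRECTORY app_config_dir AS '/u01/app/config';

-- External table over a simple name=value file; the file lives outside the
-- database, so cloning the instance does not drag production values along.
CREATE TABLE app_parameters_ext (
    param_name  VARCHAR2(64),
    param_value VARCHAR2(256)
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY app_config_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY '='
    )
    LOCATION ('params.txt')
);

-- Optional caching layer: load the values into a context so lookups avoid the
-- external-table overhead. Add ACCESSED GLOBALLY here for the globally
-- accessible variant mentioned above.
CREATE CONTEXT app_params_ctx USING app_params_pkg;

CREATE OR REPLACE PACKAGE app_params_pkg AS
    PROCEDURE load_params;
END app_params_pkg;
/

CREATE OR REPLACE PACKAGE BODY app_params_pkg AS
    PROCEDURE load_params IS
    BEGIN
        FOR r IN (SELECT param_name, param_value FROM app_parameters_ext) LOOP
            DBMS_SESSION.SET_CONTEXT('app_params_ctx', r.param_name, r.param_value);
        END LOOP;
    END load_params;
END app_params_pkg;
/

-- Retrieval, from PL/SQL or SQL:
-- SELECT SYS_CONTEXT('app_params_ctx', 'email_error_address') FROM dual;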
You could use environment variables that you can set per Oracle user (the account that starts up the Oracle database) or per server. The environment variables can be read with the DBMS_SYSTEM.GET_ENV procedure.
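A minimal sketch of reading such a variable from PL/SQL (DBMS_SYSTEM is an undocumented SYS package requiring an explicit EXECUTE grant, and APP_ENV is just an assumed variable name):

SET SERVEROUTPUT ON
DECLARE
    v_env VARCHAR2(255);
BEGIN
    -- Reads an OS environment variable from the server process environment.
    SYS.DBMS_SYSTEM.GET_ENV('APP_ENV', v_env);
    DBMS_OUTPUT.PUT_LINE('Running against environment: ' || NVL(v_env, '(not set)'));
END;
/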
I tend to use a system_parameters table. If you're concerned about it being overwritten, put it in its own schema and make a public synonym.
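A sketch of that approach, with placeholder names:

-- Keep the parameters in a dedicated schema so refreshes of the application
-- schema leave them untouched.
CREATE TABLE config_owner.system_parameters (
    param_name  VARCHAR2(64) PRIMARY KEY,
    param_value VARCHAR2(256)
);

CREATE PUBLIC SYNONYM system_parameters FOR config_owner.system_parameters;
GRANT SELECT ON config_owner.system_parameters TO PUBLIC;

INSERT INTO config_owner.system_parameters (param_name, param_value)
VALUES ('email_error_address', 'qa_support@abc.com');
COMMIT;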
@APC's answer is clever.
You could solve the performance overhead by adding a materialized view on top of the external table(s). You would refresh it after RMAN-cloning, and after each update of the config files.
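For instance, reusing the assumed external table from the sketch above:

-- Snapshot of the external table; refresh after an RMAN clone or a file edit.
CREATE MATERIALIZED VIEW app_parameters_mv
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS SELECT param_name, param_value FROM app_parameters_ext;

-- After updating params.txt (or after cloning):
-- EXEC DBMS_MVIEW.REFRESH('APP_PARAMETERS_MV', 'C');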

Loading data to shared memory from database tables

Any idea about loading data from database tables into shared memory? The idea is to speed up data retrieval from frequently used tables.
The server will automatically cache frequently used tables, so I would not optimize from the server side. Now, if the client is querying remotely, you might consider copying the data to a local database (like the free SQL Express).
You are talking about a cache. It is easily implemented, but there are some tricks you need to remember:
You will need to log changes in the underlying table and reload the cache when they happen, e.g. by polling a change table (see the sketch below).
Some operations might be faster inside the database than in your own memory structure.
(If you are interested in fast data access with no work at all, there are some in-memory databases that can do the trick for you.)
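As a rough illustration of the change-table idea (T-SQL flavour, all names assumed):

-- Change-log table that the application cache polls periodically.
CREATE TABLE product_changes (
    change_id  INT IDENTITY(1,1) PRIMARY KEY,
    product_id INT NOT NULL,
    changed_at DATETIME NOT NULL DEFAULT GETDATE()
);
GO

-- Record every modification to the cached table (products is a placeholder).
CREATE TRIGGER trg_products_log ON products
AFTER INSERT, UPDATE, DELETE
AS
INSERT INTO product_changes (product_id)
SELECT product_id FROM inserted
UNION
SELECT product_id FROM deleted;
GO

-- The cache remembers the last change_id it has seen and periodically reloads
-- only the rows returned by:
-- SELECT product_id FROM product_changes WHERE change_id > @last_seen_id;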

How to performance test nested Sybase stored procedures?

I am looking for any tool which will allow the performance testing/tuning of Sybase nested stored procedures. There are many tools around and of course Sybase's own for performance tuning and testing SQL but none of these can handle nested stored procedures (i.e. a stored proc calling another stored proc). Does anyone have/know of such a tool?
I don't know anything that does this but I'd love to see a tool that does. What I tend to do in this situation is to try to establish which of the nested stored procedures is consuming the most resources or taking the longest and then performance tuning that procedure in isolation.
I am not sure which Sybase DB you are using at the moment, but have you tried the Profiler in the Sybase Central tool? Right Click on DB Connection and then select PROFILE (or PROFILER???)
I have used it in the past for single stored procedures but I do not recall if it works all the way down the calling chain from one SP to another. At the least it should tell you how long each sub-SP which was called from your initial SP took and then you can home in on the procedures needing the most time.
I hope that helps.
Cheers,
Kevin
Late to the game, but in Sybase you have the option of using "SET FMTONLY" to get around "SET NOEXEC" turning off the evaluation of the nested procedure.
For example, suppose sp_B is defined, and sp_A is defined and calls sp_B. Then the following will show the execution plans for both sp_A and sp_B:
SET SHOWPLAN ON
GO
SET FMTONLY ON
GO
sp_A
GO
See the Sybase writeup here (this worked in ASE 12.5 as well as ASE 15):
Using set showplan with noexec
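For a self-contained illustration, with placeholder procedure bodies:

create procedure sp_B as
    select count(*) from sysobjects
go

create procedure sp_A as
    exec sp_B
go

SET SHOWPLAN ON
go
SET FMTONLY ON
go
exec sp_A
go
SET FMTONLY OFF
go
SET SHOWPLAN OFF
go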
