How do I determine whether a database connection is read/write or read-only in GRDB?

In GRDB I can create read-only database connections with
try dbPool.read { db in
I can create read-write database connections with
try dbPool.write { db in
In each case I can call a function from within the closure, using db as an argument. In GRDB, read-only database connections can run in parallel on different threads, while write connections block the database.
I have a pattern I would like to use in some functions:
Select a record filtered on some attributes.
If a record is found, return it.
If no record is found, create a default placeholder record with those attributes.
If the db is read-write, save the default placeholder and return it.
If the db is read-only, just return the placeholder
Is there a way to determine if the db is read-only? Looking through the reference documentation at http://groue.github.io/GRDB.swift/docs/4.1/Classes/Database.html, I don't see a simple way to determine this aside from error handling.
I could just attempt to save in the function and catch the error if the db is read-only.

Is there a way to determine if the db is read-only?
No, there isn't any public way to get this information. It would be possible, but the feature request hasn't shown up yet.
I could just attempt to save in the function and catch the error if the db is read-only.
Correct.
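For illustration, a minimal sketch of that approach, assuming a hypothetical Player record type and assuming that a write attempted through a read-only connection surfaces as a DatabaseError with the SQLITE_READONLY result code:

import GRDB

struct Player: Codable, FetchableRecord, MutablePersistableRecord {
    var id: Int64?
    var name: String
}

// Fetch-or-create: tries to persist the placeholder, and falls back to
// returning it unsaved when the connection is read-only.
func findOrCreatePlayer(_ db: Database, name: String) throws -> Player {
    if let player = try Player.filter(Column("name") == name).fetchOne(db) {
        return player
    }
    var placeholder = Player(id: nil, name: name)
    do {
        try placeholder.insert(db)
    } catch let error as DatabaseError where error.resultCode == .SQLITE_READONLY {
        // Read-only connection: return the unsaved placeholder.
    }
    return placeholder
}

The same function can then be called from both dbPool.read and dbPool.write closures.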
I have a pattern I would like to use in some functions [...]
The pattern you describe looks like a premature optimization for a plain simple write. Do you notice actual performance problems?

Related

What are the differences between Session and a Local (client-side only) Collection?

In Meteor, I have a little confusion between Session and Local Collection.
I know that Session is a temporary reactive key-value store, client-side only, and is cleaned on page refresh.
A local collection seems to be the same: reactive, temporary client-side storage, cleaned on page refresh, but with more flexible functions like insert, update & remove queries, like a server-side Mongo collection.
So I guess I could manage everything in a Local Collection without Session, or everything in Session without a Local Collection.
But what is the best and most efficient way to use Session and/or a Local Collection?
Simply: when should I use Session, and when should I avoid it?
And when should I use a Local Collection, and when not?
As I read your question I told myself that this is a very easy question, but then I was scratching my head. I tried to figure out an example that you could accomplish with only sessions or only collections, but I didn't find any use case. So let's roll things up from the beginning. Basically you already answered the question on your own, because it is the little bit of sugar that makes collections something special.
When to use a collection?
Basically a collection is a database artifact. Imagine you have a client-server application. All the data is persisted in the server-side storage. Now you can use a local collection to provide the user a small subset of the server's collection. So a client collection is a database with a reduced amount of data. The advantage is that you can access the collection with queries, and you can use the same queries on server and client. In addition, a collection always contains multiple objects of the same type. Sometimes you produce data on the client, for the client, with no server interaction needed. Then you can use a local collection. A local collection provides the same functionality as a normal collection without server communication. This should be used if you have multiple objects with the same structure, especially if you'd like to use query operators.
You can also save the data inside a session object. Session objects can contain multiple objects as well. But imagine you want to find an object in an object array, indexed by a particular id. Then you need to iterate through the whole array in order to find this object, and write additional logic that a collection handles like magic. Further, collections return cursors. A cursor is a reactive object that only changes if the selected data changes. That means that if you use find with an id, the result only rerenders when the object with this id changes. With Session you can't do that: when a session value changes, you need to rerender all depending objects.
When to use a session?
For everything else. Sessions are often just small objects that contain some configuration logic. A session value is basically just one object, not multiple occurrences of equal objects. I don't have time now to go into detail, but if it does not fit the collection use cases, you can use sessions.
Have a look at this post that describes why sessions should not be overused.
I assume that by local collection you mean: new Mongo.Collection(null)
The difference is that local collections do not survive hot code pushes. A refresh will erase Session, but a hot code push will not: there's special code in Meteor to persist the values of the Session variable in the case of a hot code push.
You would use Session whenever you're storing temporary values that do NOT need to be persisted to the database.
Trivial examples could include a user's selection of filters, or the item in an index view that is currently selected.
Manipulated data in minimongo (insert, update, delete, etc.) is intended to be sent back to the server and stored in the database. For example, this could be updating a user's profile information.
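For illustration, a minimal client-side sketch of the two stores (the collection, field, and key names are made up):

// Local collection: passing null makes it client-only, never synced to a server.
const Drafts = new Mongo.Collection(null);
Drafts.insert({ title: 'Untitled', tags: ['todo'] });
// Full minimongo selectors work; find() returns a reactive cursor.
const todoDrafts = Drafts.find({ tags: 'todo' });

// Session: a reactive key-value store for small bits of UI state.
Session.set('selectedFilter', 'todo');
Tracker.autorun(function () {
  // Reruns whenever 'selectedFilter' changes.
  console.log('filter is', Session.get('selectedFilter'));
});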

Why we require global temporary tables, their practical use cases

I just got to know about global temporary tables (I use Oracle as my db) and understood them, but I am not able to see their exact use.
As the name suggests, they are used to keep temporary per-session data; on closing the session, the data for that session vanishes.
What are some examples or business use cases where we need to use them?
A possible use case would be a user that only has rights to create reports. These reports use a lot of aggregated data, and this information is reused in more than one of the reports. So it would be a good idea to cache the information for this session, and this can be done by using temporary tables.
Another use case would be that you have a reporting framework and want an easy way to query information. You would just call a PL/SQL package that puts together the information in a temporary table, and after the package has finished you can just do a SELECT * FROM temporary_table.
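For illustration, a minimal sketch of that caching pattern (table and column names are made up):

-- ON COMMIT PRESERVE ROWS keeps the cached rows for the whole session;
-- ON COMMIT DELETE ROWS would clear them at the end of each transaction.
CREATE GLOBAL TEMPORARY TABLE report_cache (
    region      VARCHAR2(30),
    total_sales NUMBER
) ON COMMIT PRESERVE ROWS;

-- A reporting package fills the cache once per session ...
INSERT INTO report_cache
SELECT region, SUM(amount) FROM sales GROUP BY region;

-- ... and every report in that session reuses it. Each session sees only
-- its own rows, and the data vanishes when the session ends.
SELECT * FROM report_cache;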

Can I use EF and Linq, if I am getting connectionstring at runtime?

We have a database where all connection strings are saved in a table. I need to use these connection strings to obtain data from their respective databases. I can do it by running queries, but I want to use LINQ and EF. How can I do it?
Thanks a lot.
Let me rephrase: I have access to a database with a table containing multiple connection strings. I want to map a database at runtime and then use it to retrieve the data. Is it possible?
ObjectContext exposes this constructor, which takes a connection string argument, so you can do:
ObjectContext yourContext = new ObjectContext(yourConnectionString);
It is not possible to map a database at runtime. This will only work if your application knows the mapping for every database and has all the necessary classes prepared; you will have to create them at design time.
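For illustration, a sketch of that approach for an ObjectContext-based model. The MyEntities context, the Customers set, and connectionStringsFromYourTable are made-up names, and the metadata resource paths must match your actual .edmx model:

using System.Data.EntityClient;
using System.Linq;

class RuntimeContextFactory
{
    // ObjectContext expects an *entity* connection string, so each plain
    // connection string from your table is wrapped with the model metadata.
    public static string BuildEntityConnectionString(string sqlConnectionString)
    {
        var builder = new EntityConnectionStringBuilder
        {
            Provider = "System.Data.SqlClient",
            ProviderConnectionString = sqlConnectionString,
            Metadata = "res://*/Model.csdl|res://*/Model.ssdl|res://*/Model.msl"
        };
        return builder.ConnectionString;
    }
}

// The mapping itself is fixed at design time; only the target database varies.
foreach (var cs in connectionStringsFromYourTable)
{
    using (var ctx = new MyEntities(RuntimeContextFactory.BuildEntityConnectionString(cs)))
    {
        var customers = ctx.Customers.Where(c => c.IsActive).ToList();
    }
}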

Implementing user-defined db parameters/properties in Oracle

OK, the question title probably isn't the best, but I'm looking for a good way to implement an extensible set of parameters for Oracle database applications that "stay with" the host/instance. By "stay with", I mean that I'd like to rule out just having an Oracle table of name/value pairs that would have to be modified if I create a test/QA instance by cloning the production instance. (For example, imagine a parameter called email_error_address that should be set to prod_support@abc.com in production and qa_support@abc.com in testing).
These parameters need to be accessed from both PL/SQL code running in the database as well as client-side code. I started out doing this by overloading the plsql_cc_flags init parameter (not a solution I'm proud of), but this is getting messy to maintain and parse.
[Edit]
Ideally, the implementation would allow changes to the list without restarting the instance, similar to the dynamically-modifiable init parameters.
You want to have a separate set of values for each environment. You want these values to be independent of the data, so that they don't get overridden if you import data from another instance.
The solution is to use an external table (providing you are on 9i or higher). Because external tables hold the data in an OS file they are independent of the database. To apply changed values all you need to do is overwrite the OS file.
All you need to do is ensure that the files for each environment are kept separate. This is easy enough if Test, QA, Production, etc. are on their own servers. If they are on the same server, then you will need to distinguish them by file name or directory path; in either case you may need to issue a bit of DDL to correct the location in the event of a database refresh.
The drawback to using external tables is that they can be a bit of a performance overhead - they are really intended for bulk loading. If this is likely to be a problem you could use caching, with a user-defined namespace or CONTEXT. Load the values into memory using DBMS_SESSION.SET_CONTEXT(), either on demand or with an ON LOGON trigger. Retrieve the values through wrapper calls to SYS_CONTEXT(). Because the namespace is in session memory, retrieval is quite fast. René Nyffenegger has a simple example of working with CONTEXT: check it out.
While I've been writing this up I see you have added a requirement to change things on the fly. As I have said already, this is easy with an OS file, but the use of caching makes things slightly more difficult. The solution would be to use a globally accessible CONTEXT. Have a routine which loads all the values at startup, which you can also call whenever you refresh the OS file.
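For illustration, a sketch of the external-table half plus the optional context cache (the directory, file, table, and package names are all made up):

CREATE DIRECTORY app_config_dir AS '/etc/myapp/config';

-- External table over a simple name=value text file; to change values,
-- just overwrite the OS file - no DDL, no instance restart.
CREATE TABLE app_parameters (
    param_name  VARCHAR2(64),
    param_value VARCHAR2(256)
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY app_config_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY '='
    )
    LOCATION ('parameters.txt')
);

-- Optional caching layer: a globally accessible context, where app_ctx_pkg
-- is a hypothetical package that reads app_parameters and calls
-- DBMS_SESSION.SET_CONTEXT for each row.
CREATE CONTEXT app_ctx USING app_ctx_pkg ACCESSED GLOBALLY;

SELECT SYS_CONTEXT('app_ctx', 'email_error_address') FROM dual;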
You could use environment variables that you can set per Oracle user (the account that starts up the Oracle database) or per server. The environment variables can be read with the DBMS_SYSTEM.GET_ENV procedure.
I tend to use a system_parameters table. If you're concerned about it being overwritten, put it in its own schema and create a public synonym.
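For illustration, a minimal sketch of that layout (schema and column names are made up):

-- Keep the table in a dedicated schema and expose it via a synonym.
CREATE TABLE config_owner.system_parameters (
    name  VARCHAR2(64) PRIMARY KEY,
    value VARCHAR2(256)
);
CREATE PUBLIC SYNONYM system_parameters FOR config_owner.system_parameters;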
@APC's answer is clever.
You could solve the performance overhead by adding a materialized view on top of the external table(s). You would refresh it after RMAN-cloning, and after each update of the config files.

Disable Hibernate auto update on flush on read only synonyms

I have a table and two databases which have the same table, but one is a symlink of the other, and only reads are permitted on this table.
I have mapped the table to Java using Hibernate and I use spring to set the Entity Manager's data source as one of the two databases based on some input criteria.
I only call read-only operations (selects) when I am connected to the second database, but it seems Hibernate tries to flush something back to the database, and it fails, saying that updates are not allowed on this view.
How do I disable this update only for the second datasource and keep it normal for the first one?
Update:
Looking at the stack trace, the flush seems to be started here:
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1027)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:365)
at org.hibernate.ejb.AbstractEntityManagerImpl$1.beforeCompletion(AbstractEntityManagerImpl.java:504)
... 55 more
Is this related to hibernate.transaction.flush_before_completion property? Can I set it to false for the second data source?
Most probably your entities become "dirty" the same moment they are loaded from the database, and Hibernate thinks that it needs to store the changes. This happens if your accessors (get and set methods) do not return the exact same value or reference that had been set by Hibernate.
In our code, this happened with lists: developers created new list instances because they didn't like the type they got in the setter.
If you don't want to change the code, change the mapping to field access.
You can also prevent Hibernate from storing changes by setting the FlushMode to never on the session, but this only hides the real problem, which will still occur in other situations and will lead to unnecessary updates.
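For illustration, a sketch of that workaround (the Item entity is made up; fixing the accessors or switching to field access is the better cure):

import org.hibernate.FlushMode;
import org.hibernate.Session;
import java.util.List;

// Suppress automatic dirty-checking flushes on the read-only data source.
Session session = sessionFactory.getCurrentSession();
session.setFlushMode(FlushMode.MANUAL); // FlushMode.NEVER on older Hibernate versions
List<Item> items = session.createQuery("from Item").list();
// Nothing is flushed at commit time, but the dirty-on-load problem
// described above is only hidden, not fixed.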
First you need to determine if this is DDL or DML. If you don't know, then I recommend you set hibernate.show_sql=true to capture the offending statement.
If it is DDL, then it's most likely Hibernate updating the schema for you, and you'd want to additionally configure the hibernate.hbm2ddl.auto setting to be either "update" or "none", depending on whether you're using the actual db or the symlinked (read-only) one, respectively. You can use "validate" instead of none, too.
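For illustration, the relevant hibernate.cfg.xml settings side by side (which value you pick depends on which datasource the session factory targets):

<!-- read-write database -->
<property name="hibernate.hbm2ddl.auto">update</property>
<!-- symlinked, read-only database -->
<property name="hibernate.hbm2ddl.auto">validate</property>
<!-- capture the offending statement while debugging -->
<property name="hibernate.show_sql">true</property>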
If it is DML, then I would first determine whether your code is for some reason making a change to an instance which is still attached to an active Hibernate Session. If so, then a subsequent read may cause a flush of these changes without ever explicitly saving the object (Grails?). If this is the case, consider evicting the instance causing the flush ( or using transport objects instead ).
Are you perhaps using any aspects or Hibernate lifecycle events to provide auditing of the objects? This, too, could cause access of a read-only to result in an insert or update being run.
It may turn out that you need to provide alternative mappings for the offending class should the updatability of a field come into play, even though the code is doing everything exactly as you'd like (this is unlikely ;0). If you are in an all-annotation world, this may be tricky. If working with hbm.xml, then providing an alternative mapping is easier.
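For illustration, a sketch of such an alternative hbm.xml mapping for the read-only database (class, table, and column names are made up):

<!-- mutable="false" tells Hibernate never to update or delete these rows;
     insert="false" update="false" pins down individual columns. -->
<class name="com.example.Item" table="ITEM_SYNONYM" mutable="false">
    <id name="id" column="ID"/>
    <property name="name" column="NAME" insert="false" update="false"/>
</class>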
