According to http://nhibernate.info/doc/nh/en/index.html#manipulatingdata-exceptions, after a database exception the Session should be discarded.
In our web app it's normal in some cases to throw and catch ADOExceptions, for instance for constraint violations.
According to the linked documentation we should then abandon the session. However, we still want to do some work with the database after a constraint violation, so I need a new session.
In our tests we do this by calling
CurrentSessionContext.Unbind(SessionFactory).Close();
CurrentSessionContext.Bind(SessionFactory.OpenSession());
but in the web app we don't use CurrentSessionContext; we use LazySessionContext. So we can't reference CurrentSessionContext directly in our business classes, since it isn't used from the web, and we can't reference LazySessionContext either, since HttpContext is not available during integration testing.
Is there a way to dispose and recreate a session and connect it to the current context, without directly referencing the context class? I have the SessionFactory object and the Session object.
Without wanting to sound critical, I would suggest that you need to rethink the design of your application. You should design the user interface, for instance with combo boxes or validation, so that users cannot enter data that would cause ADOExceptions such as constraint violations. If these do then occur, they are genuinely exceptional circumstances that you can report to your users as an internal error and perhaps log through a separate mechanism such as the health monitoring built into ASP.NET.
I would also add that your entities may need another look, as constraint violations are not something you normally need to worry about when using NHibernate.
I was looking at saving some data to my Room database and re-evaluating my approach, since there are some places in my repositories where I am extending AsyncTask (I'm still using Java), and I wanted to check on the state of things to see if it was a good time to swap them out. I saw this reference on the Android developer site on Approaches to background work.
All persistent work: You should use WorkManager for all forms of persistent work.
Immediate impersistent work: You should use Kotlin coroutines for immediate impersistent work. For Java programming language users, see Threading on Android for recommended options.
Long-running and deferrable impersistent work: You should not use long-running and deferrable impersistent work. You should instead complete such tasks through persistent work using WorkManager.
I started using WorkManager for an API which needed to be called, but for which I could not rely on network connectivity. Because I'm using Room, which is persistent, it seems like I should be using WorkManager.
It defines persistent work as:
Persistent work: Remains scheduled through app restarts and device reboots.
A database insert/update/delete is persistent by this definition. "Scheduled" throws me off a little, as I want it to be immediate, but according to this chart that would still apply.
Is anybody using WorkManager as the mechanism for CUD operations in their repositories and if so, do they have an example?
It would be great to see how this all works in an update fragment. If a single item is selected and I am viewing it in a fragment, when changes are made I would need to update the database using a Worker class and view the data using a LiveData object, correct?
Inserts and returning the id (or object) would be interesting to see as well.
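For reference, here is a minimal sketch, in Java, of what an update through WorkManager might look like; ItemDatabase, ItemDao, updateName() and the input-data keys are hypothetical names standing in for your own Room classes, not anything from the question.

import android.content.Context;
import androidx.annotation.NonNull;
import androidx.work.Worker;
import androidx.work.WorkerParameters;

// Hypothetical worker that performs a Room update as persistent work.
public class UpdateItemWorker extends Worker {

    public UpdateItemWorker(@NonNull Context context, @NonNull WorkerParameters params) {
        super(context, params);
    }

    @NonNull
    @Override
    public Result doWork() {
        long itemId = getInputData().getLong("item_id", -1L);
        String newName = getInputData().getString("item_name");
        if (itemId < 0 || newName == null) {
            return Result.failure();
        }
        // doWork() already runs on a background thread, so the DAO call can be synchronous.
        ItemDatabase.getInstance(getApplicationContext()).itemDao().updateName(itemId, newName);
        return Result.success();
    }
}

The fragment would enqueue the work when the user saves their edits, roughly as below, and keep observing a LiveData returned by the DAO so it picks up the new value once the worker commits the update:

// Inside the fragment, when the user saves their changes:
Data input = new Data.Builder()
        .putLong("item_id", itemId)
        .putString("item_name", newName)
        .build();
WorkManager.getInstance(requireContext())
        .enqueue(new OneTimeWorkRequest.Builder(UpdateItemWorker.class)
                .setInputData(input)
                .build());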
My MVC application connects to an Oracle database. We created a lot of triggers to save all data changed by users.
Inside the triggers, we use the code below to get the authenticated user:
UPPER(SYS_CONTEXT('USERENV', 'OS_USER'))
When I'm running my application on localhost, the database gets the correct user, but when I publish it to the server (IIS), the database always gets the application pool name as the user.
Is there some IIS configuration that I need to set in order to get the "Windows authentication" user? Is there another way to get this information inside an Oracle function/trigger?
You would realistically want to use a secure application context which is basically a user-controlled context unlike the system-controlled USERENV context. When the application code gets a connection from the pool, it would call a stored procedure that sets the application username in the new application context. Your triggers would then reference the new context rather than USERENV. Your application needs to ensure that the context is set appropriately every time a connection is acquired from the pool-- if the application fails to set the context correctly, your triggers will get the wrong information.
If you don't want to create your own context, you could use the CLIENT_IDENTIFIER in USERENV, which you can set via dbms_session whenever you get a connection from the pool. Functionally, this is basically identical to creating your own context. The nice thing about creating your own context, though, is that you can seamlessly add attributes in the future as you identify the need (e.g. adding the IP address of the client browser, or a tier attribute if you have gold, silver, and bronze customers).
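As a rough sketch of the CLIENT_IDENTIFIER approach (shown in Java/JDBC purely for illustration; the same anonymous PL/SQL block can be issued through ODP.NET whenever your web tier borrows a connection, and the ClientIdentifier/userName names are made up):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

public final class ClientIdentifier {

    private ClientIdentifier() {}

    // Call this immediately after taking a connection from the pool,
    // passing whatever user name the web tier has authenticated.
    public static void set(Connection connection, String userName) throws SQLException {
        try (CallableStatement call =
                 connection.prepareCall("begin dbms_session.set_identifier(?); end;")) {
            call.setString(1, userName);
            call.execute();
        }
    }
}

Your triggers would then read SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER') instead of OS_USER.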
There are alternate ways to approach the problem such as using proxy authentication. In general, though, that's not going to work as well with connection pools particularly when you have very large numbers of users.
If I have a number of Grails domain objects that I do not want to save just yet, but still want to access throughout my application, is it wise to store them in the Grails / Hibernate session (especially as regards performance)? If not, what is the alternative?
What do you mean by the grails / hibernate session?
If you really mean the Hibernate session, adding an object to it will cause the object to be saved automatically when the session is flushed (unless the object doesn't validate, in which case it will be lost once the session is discarded). A session is created and discarded per request.
If you mean the session object that gets automatically injected into controllers and views, it's neither Grails nor Hibernate specific, but just the plain old HttpSession from the Servlet specification (see http://docs.oracle.com/javaee/7/api/javax/servlet/http/HttpServletRequest.html).
You can use that to store any kind of object if you need to access them across multiple requests from the same client. The session is private to a given client (identified through the jsessionid cookie) and survives multiple requests. If you don't need the multiple-request bit, adding them as a request attribute would suffice.
Putting things in the session is generally fine and fast (since by default it is memory-based), but it will increase the memory footprint of the application if abused, and it will prevent horizontal scaling (i.e. deploying the same application in multiple instances) unless sticky-session mechanisms are used (or the session is persisted).
Bear in mind though that Grails uses a new Hibernate session per request (not an HTTP session :), so if you add objects that are attached to a Hibernate session to the HTTP session, and then the Hibernate session is closed, you might encounter problems. This shouldn't affect non-saved objects (they don't come from a Hibernate session), but it might affect their associations (other domain classes that do come from the database and therefore a Hibernate session). If that's the case, you might need to re-attach them. See https://grails.github.io/grails-doc/latest/ref/Domain%20Classes/attach.html
Also, if the session is invalidated (because the user logs out, or the server is re-deployed) everything that was stored in there will be gone.
If you don't want to rely on sessions at all, you can create your own MemoryBasedStoreService and use a ConcurrentHashMap or a similar mechanism to store and retrieve the objects. Since services are singletons in Grails, you can use it across the whole application, regardless of requests or clients - as long as your application is deployed in a single instance, of course :).
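A minimal sketch of such a service, written here in plain Java for illustration (in Grails it would normally be a Groovy class under grails-app/services, and the class/method names are made up):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Grails services are singletons by default, so one instance of this map
// is shared across the whole application (and across all clients).
public class MemoryBasedStoreService {

    private final ConcurrentMap<String, Object> store = new ConcurrentHashMap<>();

    public void put(String key, Object value) {
        store.put(key, value);
    }

    public Object get(String key) {
        return store.get(key);
    }

    public Object remove(String key) {
        return store.remove(key);
    }
}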
In a web application, I'm using JPA entities to persist (and retrieve) my domain objects to (and from) an underlying database.
These JPA entities are kept "hot" in an in-memory cache structure (think of a Map<UniqueID, Entity>) for the whole running time of the web application.
So I make a request to my web application, and an entity gets loaded from the repository. This entity gets put into the in-memory cache structure. For the whole lifetime of this request, I can happily access any fields of this entity. During this first request, lazily loading relationships to other entities also works fine, even in my View: I'm successfully using the Open-Session-in-View pattern (via Spring's OpenEntityManagerInViewInterceptor).
The first request has ended.
I'm doing the next request to my web application. This request asks for another entity. This entity is already in the in-memory cache structure, so it gets loaded from there. From this entity, I try to access a field that should lazily load relationships to other entities. This unfortunately causes the obnoxious org.hibernate.LazyInitializationException: could not initialize proxy - no Session (I'm using Hibernate as my underlying JPA implementation).
To my understanding, this exception stems from the fact that after the first request has ended, JPA/Hibernate has closed its sessions, yet the entities in my in-memory cache structure still expect those sessions to exist; when the next request triggers the lazy-loading mechanism, that mechanism can't find the session, because it no longer exists.
What are solutions to my problem?
One of the solutions is to reattach the entity to the session at the beginning of the second request using Session.update().
Another solution is to use the second level cache in Hibernate instead of your own solution. It should be much more reliable than any home-grown caching mechanism.
Basically, you cannot keep sessions alive across HTTP requests because this would mean keeping the transaction open between requests.
I think the only solution (apart from detecting yourself what is loaded and what is not) is to fetch the whole entity before putting it in your cache. IMHO you shouldn't put partially loaded objects in a cache. If you don't want to load the whole object the first time, you might use separate caches for the object's relationships.
If you want, you might also consider enabling the Hibernate cache as proposed by @Adam, but I don't think it would work well with lazily-loaded fields.
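If you do try the second-level cache route, a minimal sketch of what the entity side might look like (the Product entity and its fields are placeholders; a cache provider such as Ehcache and the hibernate.cache.use_second_level_cache / region factory settings still have to be configured separately):

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cacheable
// Entity state is kept in the second-level cache, which outlives individual sessions.
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Product {

    @Id
    private Long id;

    private String name;

    // getters and setters omitted for brevity
}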
You are getting this exception because your object is detached from the current session. You have to re-attach the object to the current session before accessing it:
session.update(object);
You can read details here
I am using Hibernate, Spring, and JPA.
In one workflow I update an entity, but these updates are not visible in another workflow. When I restart the server it works fine.
Is there a way, when I update an entity, to ask Hibernate to remove it from whatever cache it has, so that when the object is needed by any other workflow a fresh query is made?
This sounds like you have two separate sessions for the same app, and thus two first-level caches. The first-level cache is the one that Hibernate uses for itself, in the context of a session. So, if you don't close/clear your session, it will keep growing, possibly conflicting with other first-level caches (in other threads or in other VMs). It's hard to say if that's the case, as you didn't specify your environment, but you can't change another session's first-level cache.
The best solution to avoid this is to use a managed EntityManager (from your application server) to deal with entities. It's then the server's role to deal with this kind of scenario. But it seems that you are doing it the "spring way", so, you'll have to do it manually: either clear the session after you use it, or do a refresh before reading/updating your data. You'll then need some sort of locking (pessimistic/optimistic) to not lose information that might have been changed from another thread.
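As a rough sketch of the manual option with a Spring-managed EntityManager (the Invoice entity and the loadFresh() method are hypothetical, just to show the refresh call):

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;

@Entity
class Invoice {            // placeholder entity, stands in for whatever you are updating
    @Id
    Long id;
    String status;
}

public class InvoiceReader {

    @PersistenceContext
    private EntityManager entityManager;

    // Re-reads the row from the database, discarding whatever state
    // the first-level cache may be holding for this entity.
    public Invoice loadFresh(Long id) {
        Invoice invoice = entityManager.find(Invoice.class, id);
        if (invoice != null) {
            entityManager.refresh(invoice);
        }
        return invoice;
    }
}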