We had a very unusual situation at work. A bit of context first: the setup is a Java application, using Spring Boot and Hibernate, connected to an Oracle RAC database.
We added an item to one of the database tables using that application. We could SELECT this object, and we could UPDATE it... but only for a couple of minutes. Then it simply vanished.
We thought it could have been removed by someone, but according to the DBA logs there wasn't any INSERT on that table during that whole day. However, the person who added the item happened to be recording her screen, so we know for sure that it happened, and when it happened.
How could this situation happen? There were no errors in the application logs suggesting that anything went wrong during the commit. And each time we updated the item, it had to be listed first, using a SELECT.
Also, we have never had a situation like this before, and we have been using that same database and that same application together for over a year.
In case I missed any relevant details, just ask.
Related
I am creating an application in Oracle for a piece of coursework.
I've been using it for a few weeks and have just come across an issue;
I need to submit this on Friday, and it's a mess, so I created a new application and tried to add a new page, but none of the tables that I have created are available to choose from... I've tried dropping all my previous tables, objects, etc. and then recreating and re-inserting everything. My SQL queries etc. work fine.
All of my tables appear in the SQL Browser. I've even tried creating a new application a few times and trying different combinations of things, but to no avail.
I have searched and searched Google and the Oracle online help, but with no results.
I am using my University's local Oracle APEX, but this is happening on the normal APEX as well...
Any help would be appreciated... freaking out as this is due in less than 2 days!
After consulting one of my lecturers, he informed me that because I am referencing an object table, it won't show up in the drop-down list - you need to reference it using SQL instead.
Until very recently we ran a 3rd party HR database on an Oracle Unix environment. I have additionally set up various web services that hit stored procedures to carry out a few bespoke processes for our users, and all ran well for years.
However, now that we have moved to Oracle on a Windows environment there is suddenly a big problem.
The best example I have is a VB.Net solution that reads in a 2000 row CSV of employees into a datatable, runs a couple of stored procedures to bring back Post Id etc, populates a database table with the results, then feeds it all back out into a new CSV. This process used to take 1-2 minutes to complete on Unix. It now takes well over 2 hours and kills the server!
The problem manifests by overwhelming the CPU on the database server. Any stored procedure call sends Oracle.EXE into overdrive, completely maxing out the CPU core it's using, such that no other stored procedures can run and everything grinds to a halt.
We have run Oracle Enterprise Manager, which suggested the creation of some indexes etc, but nothing will improve the issue. Like I say, the SQL ran fine and swiftly for years, and it hasn't changed at all.
Does anybody know what could be causing this? I am completely at a loss.
The way I see it, it must either be:
1. A CPU/hardware issue (but we have investigated, added extra cores etc to no avail)
2. An Oracle configuration issue; or
3. An issue with the 3rd party database (which is supposedly identical to what it was on Unix).
Thanks to anyone who read this far.
P.S. I've had a Stack Overflow user account for years but can't get logged into it any more. Back to noobie status for me!
I'll try to be clearer: I'm out of ideas on this problem, even though it sounds like a classic.
My application runs on a WebLogic 10.3.3 application server, and the database is Oracle Database 11g. My problem is that there is a table in the DB, let's say "user", with a column, let's say "columnA". This table is updated by one module of the application.
What I want is this: when the value of the column becomes "abc", I have to show an alert on a console (identified by an IP address). The IP can be retrieved from the DB, as it is configured there; it is a Linux system other than the Linux machine where the Oracle database is installed. Updates to my table happen continuously from the application module. Please tell me where I should start and what I should read; I cannot work out what the approach should be. Any help is much appreciated.
A trigger on the table can call UTL_HTTP to communicate with another machine (e.g. call a RESTful API); there is a sketch after the list below.
The architectural questions are:
1. This will happen PRIOR to the commit, so you may get false alerts if a change is rolled back.
2. If you wait for a response, it will slow the system down.
3. What do you do if you get a non-standard response (e.g. the other server isn't available)?
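Here is a minimal sketch of such a trigger, assuming a table called user_tab with a column columnA and a made-up alert URL on the console machine; every name, the IP and the port are placeholders, and the database schema needs a network ACL that allows UTL_HTTP calls to that host.

    CREATE OR REPLACE TRIGGER trg_columna_alert
    AFTER UPDATE OF columnA ON user_tab
    FOR EACH ROW
    WHEN (NEW.columnA = 'abc')
    DECLARE
      l_response VARCHAR2(2000);
    BEGIN
      -- This fires before the transaction commits, so a rollback can still have produced a false alert.
      -- Hypothetical console IP/port and path; in practice you'd read them from your configuration table.
      l_response := UTL_HTTP.REQUEST('http://192.0.2.10:8080/alert?column=abc');
    EXCEPTION
      WHEN OTHERS THEN
        NULL;  -- swallow HTTP errors so an unreachable console does not break the UPDATE itself
    END;
    /

Swallowing the error in the exception handler is one answer to the third question above; the trade-off is that you will never know an alert was lost unless you also log it somewhere.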
I have somewhere around 20 tables that I am working with. I can update the User table just fine, however, when I try to update my Address table, nothing happens. I don't receive an exception and the method looks like it executes ok but when I check my data, the values are still the same.
I'm thinking it has to do with the fact that I moved my database out from under a server and onto my local SQLite instance. I did change the connection strings in the config and thought that would take care of the problem (as I stated, I can still select from all of the tables using LINQ). Has anyone encountered this before, or have some idea of what might be going on?
Edit 1 - I'm not very familiar with relocating databases with LINQ. I do know that SQLMETAL, when I run it, removes all of the customization I have done inside my datacatalog. Does just changing the connection in the config work, or do I actually have to run SQLMETAL every time the DB moves (the structure doesn't change)?
When I did this, I had to hand-modify the constructor to use a different connection string in the file that Visual Studio generates for your instance of the database. I had the same issue as you did, and this fixed the problem for me.
Is there any technology out there that will allow you to do side-by-side updates of production schemas?
The goal is to have zero downtime when applying updates to a schema in production.
WebLogic 10 has a similar feature for its Java EE apps, whereby you deploy the new version of the app and new connections go to the new app, while the existing connections continue to the old app. When all the old connections complete/time out, the old app is retired and the new app continues on... zero downtime.
Is there something similar in Oracle?
Yes. There is the online redefinition package:
DBMS_Redefinition
But I doubt this will give you zero downtime, because it doesn't account for every possible change to a schema; it lets you do some table changes. I think you need to define "zero" and how extensive the changes you want to make are. Usually if you change the database, you have to change your client as well. If you changed your database, how would the client switch automatically from the old proc signature to the new proc signature - instantaneously?
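For the table changes it does cover, the flow is roughly the following sketch, where the schema name APP, the table T and a pre-created interim table T_INTERIM (shaped like the target structure) are all placeholders:

    DECLARE
      l_errors PLS_INTEGER;
    BEGIN
      -- raises an error if the table cannot be redefined online with the default (primary key) method
      DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'T');

      -- start copying and syncing data from T into the interim table
      DBMS_REDEFINITION.START_REDEF_TABLE('APP', 'T', 'T_INTERIM');

      -- clone indexes, constraints, triggers and grants onto the interim table
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('APP', 'T', 'T_INTERIM',
                                              num_errors => l_errors);

      -- brief lock while the original and interim definitions are swapped
      DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'T', 'T_INTERIM');
    END;
    /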
Databases don't work like apps. There either is an FK from tableA to tableB or there isn't... it can't be absent for the current connection and exist only for a new connection in the way two versions of your application can. Databases just aren't the same.
That being said, there is a rumor that Oracle is working on package versioning... so you could connect to a specific version of a package to make such a migration simpler. But again... that would work for packages, and DBMS_redef would work for tables... but that's not the sum total of your database.
Oracle released 11gR2 today; it has edition-based redefinition: http://download.oracle.com/docs/cd/E11882_01/server.112/e10881/chapter1.htm#NEWFTCH1
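A rough sketch of what that looks like (the edition name is hypothetical, and editions cover editionable objects such as PL/SQL, views and synonyms rather than table storage itself):

    CREATE EDITION release_2 AS CHILD OF ora$base;

    -- compile the new versions of your packages/views while connected to the new edition
    ALTER SESSION SET EDITION = release_2;

    -- then point new application connections at release_2 (for example via a database
    -- service), while existing sessions keep running against ora$base until they disconnect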
It depends on what you mean by, or include in, "schema".
If you want to add or drop an index, that can be done "in-flight", although it will require a lock which may halt activity for a time. In the latest Oracle versions, it doesn't need to hold the lock for the entire time it takes to build the index, just for a moment to lock in the change. If you have short-duration transactions it shouldn't be noticeable.
In some cases that applies to tables as well (e.g. adding a nullable or default column).
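For instance (table, column and index names are placeholders), both of these can be done while normal DML carries on:

    -- ONLINE avoids holding the DML-blocking lock for the whole duration of the build
    CREATE INDEX emp_last_name_ix ON employees (last_name) ONLINE;

    -- adding a nullable column, or one with a default, is typically a quick dictionary-level change
    ALTER TABLE employees ADD (middle_name VARCHAR2(50));
    ALTER TABLE employees ADD (status VARCHAR2(10) DEFAULT 'ACTIVE' NOT NULL);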
If you use PL/SQL (especially packages), things can be a little more complicated. Enhancements were mooted for 11gR1 to enable in-flight application upgrades, but they got pushed out and are now expected in 11gR2 (probably out in the first half of next year).
In the meantime, a workaround is a multi-schema solution. Say your data sits in one schema ("yellow") and your current application code runs in a "blue" schema; you load your new application code into a "green" schema. You switch your connections, one by one, from blue to green. Once your connections are all using "green", you can retire "blue" until your next upgrade (when "blue" becomes the new app and "green" is retired).
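One way to wire this up, purely as an assumption here rather than the only option, is grants plus private synonyms, so that "green" sees "yellow"'s tables under unqualified names:

    -- run as a DBA (or as "yellow"): let the new code schema touch the shared data
    GRANT SELECT, INSERT, UPDATE, DELETE ON yellow.employees TO green;

    -- give "green" an unqualified name for the shared table (needs CREATE ANY SYNONYM if run as a DBA)
    CREATE OR REPLACE SYNONYM green.employees FOR yellow.employees;

    -- install the new packages/procedures in "green", then repoint application
    -- connections from "blue" to "green" one by one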
If you have a genuine 24/7 system, you'll probably always have to stage some upgrades. For example, add a new column as optional, upgrade the application to set it, then make it mandatory (possibly with some data change script for pre-existing rows).
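As a sketch of that staging (placeholder table and column names), each step can run while the application stays up:

    -- 1. add the column as optional; the existing application keeps working
    ALTER TABLE orders ADD (order_source VARCHAR2(20));

    -- 2. deploy the application version that populates order_source for new rows

    -- 3. backfill the pre-existing rows
    UPDATE orders SET order_source = 'LEGACY' WHERE order_source IS NULL;
    COMMIT;

    -- 4. only then make the column mandatory
    ALTER TABLE orders MODIFY (order_source NOT NULL);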