On RDS:
I have a table "stats". It is ~400GB
I try to create an index for it: CREATE INDEX IDX_...
Index creation starts successfully.
It takes about 10 hours to complete.
On OCI:
I have a table "stats". It is ~400GB
I try to create an index for it: CREATE INDEX IDX_...
Index creation starts successfully and continues for a while.
After 20 minutes, the process disappears (except with ssh(ServerAliveInterval 240)).
With ssh(ServerAliveInterval 240), it takes 10 hours to complete.
I tried it with 4 different clients:
sql-developer -> didn’t work
java -> didn’t work
ssh -> didn't work
ssh(ServerAliveInterval 240) -> worked (when we keep the session alive, it works)
So my question is clear: why does it work only with ssh(ServerAliveInterval 240)? And is there a way to handle it in Java with some configuration? There should be a configuration for this, because RDS handles it.
Any suggestion?
It is related to EXPIRE_TIME in sqlnet.ora: https://docs.oracle.com/database/121/NETRF/sqlnet.htm#NETRF209
It is 10 (minutes) on OCI. Setting it to 0 solved the problem.
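If the disconnect is an idle-connection timeout somewhere between the client and the database, a client-side keepalive (which is effectively what ServerAliveInterval does for ssh) is the usual workaround. A minimal sketch, assuming the Oracle thin JDBC driver and using (ENABLE=BROKEN) in the connect descriptor to turn on TCP keepalive; host, service name and indexed column are hypothetical:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateIndexKeepAlive {
    public static void main(String[] args) throws Exception {
        // (ENABLE=BROKEN) in the connect descriptor asks the OS to send TCP keepalive
        // probes on this connection -- the same idea as ssh's ServerAliveInterval.
        // Host, port, service name, credentials and index definition are placeholders.
        String url = "jdbc:oracle:thin:@(DESCRIPTION=(ENABLE=BROKEN)"
                + "(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521))"
                + "(CONNECT_DATA=(SERVICE_NAME=mydb)))";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement()) {
            // Long-running DDL: the keepalive traffic stops an idle-session timer
            // between client and database from silently dropping the connection.
            stmt.execute("CREATE INDEX IDX_STATS_COL1 ON stats (col1)");
        }
    }
}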
SonarQube is showing incorrect totals for projects and issues.
Total projects says 4, however there are only 2. It claims there are 78 bugs, but there are none, and none get displayed in the results section (see below).
I've checked the database, and grouping the [projects] table by [project_uuid] only returns 2 rows.
SonarQube v6.2 is being used, with a SQL Server database, if that makes any difference. Could this be a setup issue? I only set up this instance a few days ago, but I am not sure where to check other than the database, where at least the projects table looks right.
When your issue counts are scrambled like this, it means the ElasticSearch index is corrupted.
shut the server down
delete $SONARQUBE_HOME/data/es
start the server back up
Startup will take a little longer because there's an added delay while the index is rebuilt. The duration of this delay is dependent on the size of your instance.
Once your server comes back up, your numbers should be right.
This may have been asked numerous times but none of them helped me so far.
Here's some history:
QueryTimeOut: 120 secs
Database: DB2
App Server: JBoss
Framework: Struts 2
I have one query which fetches around a million records. Yes, we need to fetch them all at once for caching purposes; sadly, we can't change the design.
Now, we have 2 servers, Primary and DR. On the DR server, the query executes within 30 secs, so there is no timeout issue there. But on the Primary server it is timing out for some unknown reason. Sometimes it times out in rs.next() and sometimes in pstmt.executeQuery().
All DB indexes, connection pools etc. are in place. The explain plan shows there are no full table scans either.
My Analysis:
Since the query is not the issue here, could there be a network delay issue?
How can I find the root cause of this timeout? How can I make sure there is no connection leakage? (All connections are closed properly.)
Is there any way to recover from the timeout and execute the query again with an increased timeout value, e.g. pstmt.setQueryTimeout(600)? Note that this has no effect whatsoever; I don't know why.
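Roughly, this is the kind of retry I mean for point 3 (a simplified sketch: the SQL, table name, fetch size and timeout handling are placeholders, and whether setQueryTimeout is honoured at all depends on the DB2 JDBC driver and its configuration):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.SQLTimeoutException;

public class TimeoutRetry {

    // Run the query with the configured timeout; if it times out, retry once
    // with a larger value instead of failing the whole cache load.
    static void loadCache(Connection conn) throws SQLException {
        try {
            runQuery(conn, 120);      // current configured timeout
        } catch (SQLTimeoutException e) {
            runQuery(conn, 600);      // retry with an increased timeout
        } catch (SQLException e) {
            // Some drivers report a query timeout as a plain SQLException;
            // DB2 uses SQLSTATE 57014 for a statement interrupted by timeout/cancel.
            if ("57014".equals(e.getSQLState())) {
                runQuery(conn, 600);
            } else {
                throw e;
            }
        }
    }

    static void runQuery(Connection conn, int timeoutSeconds) throws SQLException {
        try (PreparedStatement pstmt = conn.prepareStatement("SELECT * FROM big_table")) {
            pstmt.setQueryTimeout(timeoutSeconds);
            pstmt.setFetchSize(5000); // fetch in larger blocks to cut network round trips
            try (ResultSet rs = pstmt.executeQuery()) {
                while (rs.next()) {
                    // populate the cache here
                }
            }
        }
    }
}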
Appreciate any inputs.
Thank You!
I am trying to upgrade from 4.0 to 4.5.1, but the process always fails at UpdateMeasuresDebtToMinutes. I am using MySQL 5.5.27 as the database, with InnoDB as the table engine.
Basically the problem looks like this problem
After the writeTimeout is exceeded (600 seconds), there is an exception in the log:
Caused by: java.io.EOFException: Can not read response from server. Expected to read 81 bytes, read 15 bytes before connection was unexpectedly lost.
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3166) ~[mysql-connector-java-5.1.27.jar:na]
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3676) ~[mysql-connector-java-5.1.27.jar:na]
Adding the indexes as proposed in the linked issue did not help.
Investigating further I noticed several things:
the migration step reads data from a table and wants to write back to the same table (project_measures)
project_measures contains more than 770000 rows
the process always hangs after 249 rows
the hang happens in org.sonar.server.migrations.MassUpdate when calling update.addBatch(), which after BatchSession.MAX_BATCH_SIZE (250) rows forces an execute and a commit
is there a way to configure the DB connection to allow this to proceed?
First of all, could you try to revert your db to 4.0 and try again?
Then, could you please give us the JDBC URL (sonar.jdbc.url) you're using?
Thanks
As I need that Sonar server to run, I finally implemented a workaround.
It seems I cannot write to the database at all as long as a big result set is still open (I tried with a second table, but hit the same issue as before).
Therefore I changed all migrations that need to read and write the project_measures table (org.sonar.server.db.migrations.v43.TechnicalDebtMeasuresMigration, org.sonar.server.db.migrations.v43.RequirementMeasuresMigration, org.sonar.server.db.migrations.v44.MeasureDataMigration) to load the changed data into a memory structure and, after closing the read result set, write it back. A simplified sketch of the pattern is below.
This is as hacky as it sounds and will not work for larger datasets, where you would need to do this by paging through the data or storing everything in a secondary datastore.
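The sketch, over plain JDBC (the column names and the conversion are placeholders, not the actual SonarQube migration code):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class ReadThenWriteMigration {

    // Read everything into memory first, close the result set, and only then
    // write back in batches, so no open read blocks the writes on project_measures.
    static void migrate(Connection conn) throws SQLException {
        List<long[]> rows = new ArrayList<>(); // [id, value] pairs held in memory
        try (Statement read = conn.createStatement();
             ResultSet rs = read.executeQuery("SELECT id, value FROM project_measures")) {
            while (rs.next()) {
                rows.add(new long[] { rs.getLong(1), rs.getLong(2) });
            }
        } // the read side is fully closed before any write starts

        conn.setAutoCommit(false);
        try (PreparedStatement update =
                 conn.prepareStatement("UPDATE project_measures SET value = ? WHERE id = ?")) {
            int inBatch = 0;
            for (long[] row : rows) {
                update.setLong(1, convert(row[1]));
                update.setLong(2, row[0]);
                update.addBatch();
                if (++inBatch % 250 == 0) { // same batch size the migration uses
                    update.executeBatch();
                    conn.commit();
                }
            }
            update.executeBatch();
            conn.commit();
        }
    }

    static long convert(long value) {
        return value; // stand-in for the real debt-to-minutes conversion
    }
}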
Furthermore, I found that later on (in 546_inverse_rule_key_index.rb) an index needs to be created on the rules table which is larger than the max key length on MySQL (2 varchar(255) columns in UTF-8 is more than 1000 bytes), so I had to limit the key length for that one too.
As I said, it is a workaround, and therefore I will not accept it as an answer.
I am in the process of creating an Oracle to Vertica process!
We are looking to create a Vertica DB that will run heavy reports. For now it is all cool: Vertica is fast, space use is great, and all is well and nice, until we get to the main part, getting the data from Oracle to Vertica.
OK, the initial load is fine: dump to CSV from Oracle, load into Vertica. Load times are a joke, no problem so far; everybody thinks it is a bad joke or there's some magic stuff going on! Well, it is simply fast.
Now the bad part -> both databases are up and running (ORACLE/VERTICA), and I have data getting altered in ORACLE, so I need to replicate my data into VERTICA. What now:
From my tests and from what I can understand about Vertica, inserts and updates are not to be used except at maybe a max of 20 per second, so real-time replication is out of the question.
So I was thinking to read the archive log from Oracle and ETL it to produce CSV data with the new data, altered data, and deleted/changed values, and then apply it to VERTICA. But I cannot apply a change list like this, because explicit data changes in VERTICA lead to slow performance.
So I am looking for some ideas about how I can solve this issue, knowing I cannot:
Alter my ORACLE production structure.
Use ORACLE env resources for filtering the data.
Use insert, update or delete statements in my VERTICA load process.
Things I depend on:
The use of copy command
Data consistency
A max 60-minute window (every 60 minutes, new/altered data needs to go to VERTICA).
I have looked at Continuent data replication, but it seems that nobody wants to sell their product; I cannot get in touch with them.
Will loading the whole data set into a new table and then swapping the tables be acceptable?
copy new() ...
-- you can swap tables in one command:
alter table old,new,swap rename to swap,old,new;
truncate new;
Extract the data from Oracle (in .csv format) and load it using the Vertica COPY command. Write a simple shell script to automate this process.
I used to use Talend (ETL), but it was very slow, so I moved to this conventional process, and it has really worked for me. Currently processing 18M records, my entire process takes less than 2 minutes.
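If you'd rather drive this from Java than a shell script, the same pattern can be sketched over JDBC: bulk-load the CSV into a staging table with COPY ... FROM LOCAL, then swap it in with the multi-table rename shown in the other answer. Host, table and file names are placeholders, and this assumes your Vertica JDBC driver version supports COPY LOCAL:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CsvToVertica {
    public static void main(String[] args) throws Exception {
        // Placeholder host/database/credentials for the Vertica JDBC driver.
        String url = "jdbc:vertica://vertica-host:5433/reports";
        try (Connection conn = DriverManager.getConnection(url, "dbadmin", "password");
             Statement stmt = conn.createStatement()) {
            // Bulk load the CSV extracted from Oracle into the staging table.
            stmt.execute("COPY stats_stage FROM LOCAL '/data/stats.csv' DELIMITER ',' DIRECT");
            // Swap staging and live tables in one statement (the rename trick above),
            // then empty the staging table for the next 60-minute cycle.
            stmt.execute("ALTER TABLE stats, stats_stage, stats_swap "
                       + "RENAME TO stats_swap, stats, stats_stage");
            stmt.execute("TRUNCATE TABLE stats_stage");
        }
    }
}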
This is my first question. I've searched a lot of info from different sites, but none of it was conclusive.
Problem:
Daily I'm loading a flat file with an SSIS package executed in a scheduled job in SQL Server 2005, but it's taking TOO MUCH TIME (like 2 1/2 hours), and the file just has like 300 rows and is approximately a 50 MB file. This is driving me crazy, because it is affecting the performance of my server.
This is the Scenario:
-My package is just a Data Flow Task that has a Flat File Source and an OLE DB Destination, that's all!!!
-The Data Access Mode is set to FAST LOAD.
-There are just 3 indexes on the table, and they are nonclustered.
-My destination table has 366,964,096 records so far and 32 columns
-I haven't set FastParse on any of the output columns yet (I want to try something else first).
So I've just started to make some tests:
-Rebuilt/reorganized the indexes on the destination table (they were way too fragmented), but this didn't help much.
-Created another table with the same structure but without all the indexes and executed the job with the SSIS package loading into this new table, and IT JUST TOOK LIKE 1 MINUTE!!!
So I'm confused. Is there something I'm missing???
-Is the SSIS package writing the whole large table to a buffer and then writing it to disk? Or why else is there such a BIG difference in time?
-Are the indexes affecting the insertion time?
-Should I load the file into this new table as a temporary table and then do a BULK INSERT into the destination table with the records ordered? I thought the Data Flow Task was much faster than BULK INSERT, but at this point I don't know anymore.
Thanks in advance.
One thing I might look at is whether the large table has any triggers which are causing it to be slower on insert. Also, if the clustered index is on a field that will require a good bit of rearranging of the data during the load, that could cause issues as well.
In SSIS packages, using a merge join (which requires sorting) can cause slowness, but from your description it doesn't appear you did that. I mention it only in case you were doing that and didn't mention it.
If it works fine without the indexes, perhaps you should look into those. What are the data types? How many are there? Maybe you could post their definitions?
You could also take a look at the fill factor of your indexes - especially the clustered index. Having a high fill factor could cause excessive IO on your inserts.
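To make the fill factor suggestion concrete, this is roughly the rebuild being described, issued here through JDBC for illustration (index, table and connection details are placeholders; the same ALTER INDEX statement can of course be run directly in Management Studio):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RebuildWithFillFactor {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string for the Microsoft JDBC driver.
        String url = "jdbc:sqlserver://dbserver;databaseName=Warehouse;user=loader;password=secret";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            // Rebuild one nonclustered index leaving 20% free space per page,
            // so the daily inserts cause fewer page splits (and less IO).
            stmt.execute("ALTER INDEX IX_BigTable_Col1 ON dbo.BigTable REBUILD WITH (FILLFACTOR = 80)");
        }
    }
}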
Well, I rebuilt the indexes with another fill factor (80%) like Sam told me, and the time dropped significantly. It took 30 minutes instead of almost 3 hours!!!
I will keep testing to fine-tune the DB. Also, I didn't have to create a clustered index; I guess with a clustered index the time would drop a lot more.
Thanks to all; I hope this helps someone in the same situation.