I am trying to upgrade SonarQube v5.1 to v5.2 and it fails with the following error:
ERROR web[o.s.s.d.m.DatabaseMigrator] Fail to execute database migration: org.sonar.db.version.v52.RemoveDuplicatedComponentKeys
java.lang.IllegalStateException: Error during processing of row:..................................................................
Caused by: java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@7f872fa8 is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
2015.11.05 09:08:32 INFO web[o.s.s.d.m.PlatformDatabaseMigration] DB migration failed | time=6911ms
2015.11.05 09:08:32 ERROR web[o.s.s.d.m.PlatformDatabaseMigration] DB Migration or container restart failed. Process ended with an exception
org.jruby.exceptions.RaiseException: (StandardError) An error has occurred, all later migrations canceled:
Fail to execute database migration: org.sonar.db.version.v52.RemoveDuplicatedComponentKeys
I found a workaround. I've deleted the duplicated projects from the projects table and then restarted the migration process.
To get an idea what is going on, execute the following query on your database:
select p.kee, COUNT(p.kee) FROM projects p GROUP BY p.kee HAVING COUNT(p.kee) > 1;
If this query returns any rows, you have to remove the duplicated ones (e.g. the oldest ones). In my case it was easy because I had no issues in the issues table related to the projects in question.
If you have to mimic the SonarQube 5.2 migration procedure step by step (deleting duplicated projects and updating the issues that reference them), you can find the list of queries executed during this migration step on GitHub.
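For illustration, a hedged MySQL sketch of that cleanup: it keeps the row with the highest id for each duplicated kee and deletes the older ones. The column names follow the query above; verify them against your schema and back up the database before running any delete.

DELETE p
FROM projects p
JOIN (
    SELECT kee, MAX(id) AS keep_id
    FROM projects
    GROUP BY kee
    HAVING COUNT(*) > 1
) d ON p.kee = d.kee AND p.id < d.keep_id;

If the issues table does reference the projects you are about to delete, update or remove those issue rows first, mirroring the migration queries linked above.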
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 5 in stage 76.0 failed 4 times, most recent failure: Lost task 5.3 in stage 76.0 (TID 2334) (10.139.64.5 executor 6): com.databricks.sql.io.FileReadException: Error while reading file <File_Path> It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved. If Delta cache is stale or the underlying files have been removed, you can invalidate Delta cache manually by restarting the cluster.
In addition to what the answer by AbhishekKhandave-MT suggests, you can try explicitly repairing the table:
FSCK REPAIR TABLE delta.`path/to/delta`
This also fixes scenarios where the underlying files of the table have actually been changed without it being reflected in the "_delta_log" transaction log.
There are two things you can try for this error:
1. Refresh the table (a usage sketch follows this list)
Invalidates the cached entries of the Apache Spark cache, which include both data and metadata of the given table or view. The invalidated cache is repopulated lazily the next time the cached table, or a query that depends on it, is executed.
REFRESH [TABLE] table_name
2. Manually restart the cluster.
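A minimal usage sketch of the first option; the table name sales_db.orders is a hypothetical placeholder, so substitute your own:

REFRESH TABLE sales_db.orders;

If refreshing the cache is not enough, fall back to the FSCK REPAIR TABLE command shown earlier or to a cluster restart.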
I use DataGrip to move some data from a MySQL installation to another PostgreSQL database.
That worked like a charm for 3 other tables. The next one, over 500,000 rows, could not be imported.
I use the function "Copy Table To... (F5)".
This is the log.
16:28 Connected
16:30 user@localhost: tmp_post imported to forum_post: 1999 rows (1m 58s 206ms)
16:30 Can't save current transaction state. Check connection and database settings and try again.
For other errors, such as wrong data types or null data in NOT NULL columns, a very helpful log is created. But not this time.
This problem is also relevant when using the database plugin for IntelliJ-based IDEs, not only DataGrip.
The simplest way to solve the issue is just to add "prepareThreshold=0" to your connection string as in this answer:
jdbc:postgresql://ip:port/db_name?prepareThreshold=0
Or, for example, if you are using several settings in the connection string:
jdbc:postgresql://hostmaster.com:6432,hostsecond.com:6432/dbName?&targetServerType=master&prepareThreshold=0
It's a well-known problem when connecting to the PostgreSQL server via PgBouncer rather than a problem with IntelliJ itself. When loading a large amount of data into the database, IntelliJ splits it into chunks and loads them sequentially, each time executing the query and committing the data. By default, the PostgreSQL JDBC driver starts using server-side prepared statements after the fifth execution of a query.
The driver uses server side prepared statements by default when PreparedStatement API is used. In order to get to server-side prepare, you need to execute the query 5 times (that can be configured via prepareThreshold connection property). An internal counter keeps track of how many times the statement has been executed and when it reaches the threshold it will start to use server side prepared statements.
Your PgBouncer probably runs with transaction pooling, and even the latest version of PgBouncer doesn't support prepared statements with transaction pooling.
How to use prepared statements with transaction pooling?
To make prepared statements work in this mode would need PgBouncer to keep track of them internally, which it does not do. So the only way to keep using PgBouncer in this mode is to disable prepared statements in the client.
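If you want to confirm the pooling mode, a hedged sketch of what you could run against the PgBouncer admin console (the special "pgbouncer" database, not your application database):

SHOW CONFIG;  -- look for the pool_mode setting: session, transaction, or statement
SHOW POOLS;   -- recent PgBouncer versions also report pool_mode per pool here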
You can verify that the issue is indeed caused by the incorrect use of prepared statements with PgBouncer by viewing the IntelliJ log files. Go to Help -> Show Log in Explorer and search for the "org.postgresql.util.PSQLException: ERROR: prepared statement" exception.
2022-04-08 12:32:56,484 [693272684] WARN - j.database.dbimport.ImportHead - ERROR: prepared statement "S_3649" does not exist
java.sql.SQLException: ERROR: prepared statement "S_3649" does not exist
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
at org.postgresql.jdbc.PgConnection.executeTransactionCommand(PgConnection.java:755)
at org.postgresql.jdbc.PgConnection.commit(PgConnection.java:777)
I'm running a SQL query in Oracle SQL Developer, joining 4 tables, on an AWS machine. The database is Oracle 11gR2.
Simple queries work fine but this specific query fails to run with the following error:
12805. 00000 - "parallel query server died unexpectedly"
*Cause: A parallel query server died unexpectedly, PMON cleaning
up the process.
*Action: Check your system for anomalies and reissue the statement.
If this error persists, contact Oracle Support Services.
See trace file for more details.
The error is only vaguely described in search results, and no concrete solution has worked. It would be great to get help with this one.
After installing SONAR 3.7.3 I received the following error on startup: "o.s.s.p.DatabaseServerCompatibility Database must be upgraded. Please browse /setup"
I then followed the instructions to upgrade the database by navigating to http://:/setup. However, when I click to update the database I get the following error:
The migration failed: An error has occurred, all later migrations canceled: ActiveRecord::JDBCError: ORA-01430: column being added already exists in table : ALTER TABLE reviews ADD manual_severity NUMBER(1).
Please check the logs.
I can see, as the message suggests, that the reviews table already exists and already contains the manual_severity column, so I'm not sure why it's trying to re-add it.
Any ideas?
I would restore a backup of the database schema and start the upgrade again from the beginning.
If you don't have a backup, I cannot see any other way than to delete the column from your schema and try to migrate the database again, but you might run into many errors of this kind and have to repeat this operation several times. Also, you will never be sure that your DB is OK.
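A hedged sketch of that manual fix for the column named in the error message (take a schema backup first, and repeat for any other column that ORA-01430 reports):

ALTER TABLE reviews DROP COLUMN manual_severity;

After that, retry the migration from the /setup page.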
In some respects, this is similar to How to handle Oracle synonyms with Flyway 2.0.1?. Our controlling Security & DBA groups have decreed one Oracle schema for the objects (tables, views, stored procedures), {owner}, and one schema for access via synonyms, {user}. Most apps could probably use the switch-context approach, but apart from the aforementioned policy blindsiding me, there is some legacy PL/SQL usage amongst the mostly Java JPA access. Owner does not have any access to the user schema. User can create synonyms.
I have not yet looked at writing custom Java code; I am trying to do this from the mvn command line, out of the box.
So I have run Flyway 2.2 init() against the Owner schema and then immediately created a copy called SCHEMA_VERSION_USER. (Upper case to avoid Oracle 10g causing issues with the synonym.)
I then manually created a synonym SCHEMA_VERSION_USER pointing to Owner.SCHEMA_VERSION_USER.
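Roughly the statement used for that step, run as the {user} account (OWNER is an illustrative name):

CREATE SYNONYM SCHEMA_VERSION_USER FOR OWNER.SCHEMA_VERSION_USER;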
Next, I executed
mvn compile flyway:migrate -Dflyway.user=USER -Dflyway.table=SCHEMA_VERSION_USER
but received
[INFO] Upgrading the metadata table "USER"."SCHEMA_VERSION_USER" to the Flyway 2.0 format...
[INFO] Checking prerequisites...
[ERROR] com.googlecode.flyway.core.api.FlywayException: Unable to upgrade the metadata table "USER"."SCHEMA_VERSION_USER" to the Flyway 2.0 format
[ERROR] Caused by java.sql.SQLException: ORA-00904: "DESCRIPTION": invalid identifier
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Flyway Error: com.googlecode.flyway.core.api.FlywayException: Unable to upgrade the metadata table "USER"."SCHEMA_VERSION_USER" to the Flyway 2.0 format
ORA-00904: "DESCRIPTION": invalid identifier
When I tried qualifying the table, as in -Dflyway.table=USER.SCHEMA_VERSION_USER, it failed with
[ERROR] com.googlecode.flyway.core.api.FlywayException: Found non-empty schema "USER" without metadata table! Use init() first to initialize the metadata table.
Is this a defect, deliberate design, not thought of, or backlog? Do I need to have the DBA execute everything as SYSDBA to avoid permission issues? I'd prefer not to, as that kind of locks us into using them for all environments, not just prod, and we are trying to introduce 'automated' continuous delivery. Or maybe just have them manually create the schema_version table in the User schema to avoid the initial upgrade check? Do I have to write Java to bypass the initial upgrade check?
Post note: we are trying to get them to embrace some modern DB automation tools; they have previously only accepted custom hand-rolled control scripts that are a maintenance nightmare.
Thanks!
This is a scenario that has not come up so far. The upgrade code assumes it is a table and wants to ensure a smooth migration from Flyway 1.x. It cannot be bypassed at this point.
Creating the table in the user's schema should solve the issue. You can specify the schema in which the table will be created by using the flyway.schemas property; the table will be created in the first schema of the list.
This is a problem where I work as well. We have an MD user and an APP user. The APP user cannot create tables, and the MD user is not allowed to have CREATE ANY TABLE. So we really need the APP user's Flyway version table to be a synonym of the MD user's table. Has anyone done any work on this since this question was asked 2 years ago?
Use -Dflyway.schemas=OWNER while you run the command.
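For example, combined with the flags from the question, the invocation would look roughly like:

mvn compile flyway:migrate -Dflyway.user=USER -Dflyway.schemas=OWNER -Dflyway.table=SCHEMA_VERSION_USER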