We are trying to migrate SonarQube 6.5 from EC2 to Kubernetes; our database is in AWS RDS.
Steps I followed:
1) Launched a SonarQube 6.7 pod with an empty DB (e.g. sonark8s).
2) Backed up the existing prod DB and restored it into the new DB (sonark8s).
3) Restarted the pod and then executed the upgrade.
But I am getting the error 'Upgrade Failed: Database connection cannot be established. Please check database status and JDBC settings.'
web.log error:
2019.01.08 12:20:42 ERROR web[][DbMigrations] #1801 'Create table CE task characteristics': failure | time=18ms
2019.01.08 12:20:42 ERROR web[][DbMigrations] Executed DB migrations: failure | time=20ms
2019.01.08 12:20:42 ERROR web[][o.s.s.p.d.m.DatabaseMigrationImpl] DB migration failed | time=64ms
2019.01.08 12:20:42 ERROR web[][o.s.s.p.d.m.DatabaseMigrationImpl] DB migration ended with an exception
org.sonar.server.platform.db.migration.step.MigrationStepExecutionException: Execution of migration step #1801 'Create table CE task characteristics' failed
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:79)
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:67)
at java.util.Iterator.forEachRemaining(Iterator.java:116)
at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.util.stream.ReferencePipeline$Head.forEachOrdered(ReferencePipeline.java:590)
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:52)
at org.sonar.server.platform.db.migration.engine.MigrationEngineImpl.execute(MigrationEngineImpl.java:50)
at org.sonar.server.platform.db.migration.DatabaseMigrationImpl.doUpgradeDb(DatabaseMigrationImpl.java:105)
at org.sonar.server.platform.db.migration.DatabaseMigrationImpl.doDatabaseMigration(DatabaseMigrationImpl.java:80)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: Fail to execute CREATE TABLE ce_task_characteristics (uuid VARCHAR (40) NOT NULL,task_uuid VARCHAR (40) NOT NULL,kee VARCHAR (512) NOT NULL,text_value VARCHAR (512) NULL, CONSTRAINT pk_ce_task_characteristics PRIMARY KEY (uuid)) ENGINE=InnoDB CHARACTER SET utf8 COLLATE utf8_bin
at org.sonar.server.platform.db.migration.step.DdlChange$Context.execute(DdlChange.java:97)
at org.sonar.server.platform.db.migration.step.DdlChange$Context.execute(DdlChange.java:77)
at org.sonar.server.platform.db.migration.step.DdlChange$Context.execute(DdlChange.java:117)
at org.sonar.server.platform.db.migration.version.v66.CreateTableCeTaskCharacteristics.execute(CreateTableCeTaskCharacteristics.java:67)
at org.sonar.server.platform.db.migration.step.DdlChange.execute(DdlChange.java:45)
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:75)
... 11 common frames omitted
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'ce_task_characteristics' already exists
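The "already exists" error, not a connection problem, is the real cause: starting the 6.7 pod against the empty database in step 1 already created part of the 6.7 schema, and the migration now collides with those leftover tables. A minimal check-and-cleanup sketch, assuming MySQL and that `ce_task_characteristics` is an empty leftover from that aborted first startup (verify before dropping anything):

```sql
-- Confirm the table is an empty leftover from the pre-restore startup
SELECT COUNT(*) FROM ce_task_characteristics;

-- If it is, drop it so migration step #1801 can recreate it
DROP TABLE ce_task_characteristics;
```

The safer route is to drop and recreate the sonark8s database, restore the 6.5 backup into it first, and only then start the 6.7 pod and trigger the upgrade, so the fresh server never writes schema objects ahead of the restore.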
Related
After running startSonar.bat I encountered the following error:
ERROR web[][o.s.s.p.PlatformImpl] Web server startup failed
java.lang.IllegalStateException: Failed to create table schema_migrations
.
.
.
Caused by:
org.h2.jdbc.JdbcSQLSyntaxErrorException: Syntax error in SQL statement "CREATE TABLE SCHEMA_MIGRATIONS (VERSION VARCHAR (۲۵۵[*]) NOT NULL)"; expected "long"; SQL statement:
CREATE TABLE schema_migrations (version VARCHAR (۲۵۵) NOT NULL) [42001-199]
We are getting the below error when restarting SonarQube 6.7.2; can someone please advise? Thank you.
2018.04.04 10:31:44 ERROR web[][DbMigrations] #1619 'Restore 'sonar-users' group': failure | time=8ms
2018.04.04 10:31:44 ERROR web[][DbMigrations] Executed DB migrations: failure | time=9ms
2018.04.04 10:31:44 ERROR web[][o.s.s.p.d.m.DatabaseMigrationImpl] DB migration failed | time=56ms
2018.04.04 10:31:44 ERROR web[][o.s.s.p.d.m.DatabaseMigrationImpl] DB migration ended with an exception
org.sonar.server.platform.db.migration.step.MigrationStepExecutionException: Execution of migration step #1619 'Restore 'sonar-users' group' failed
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:79)
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:67)
We are running setup to update our SonarQube to version 7.0 - we get a database failure (see stack trace below).
Any idea how we can get past this?
2018.02.07 07:16:47 INFO web[][o.s.s.p.d.m.DatabaseMigrationImpl] Starting DB Migration and container restart
2018.02.07 07:16:47 INFO web[][DbMigrations] Executing DB migrations...
2018.02.07 07:16:47 INFO web[][DbMigrations] #1907 'Populate table live_measures'...
2018.02.07 07:16:48 ERROR web[][DbMigrations] #1907 'Populate table live_measures': failure | time=788ms
2018.02.07 07:16:48 ERROR web[][DbMigrations] Executed DB migrations: failure | time=790ms
2018.02.07 07:16:48 ERROR web[][o.s.s.p.d.m.DatabaseMigrationImpl] DB migration failed | time=902ms
2018.02.07 07:16:48 ERROR web[][o.s.s.p.d.m.DatabaseMigrationImpl] DB migration ended with an exception
org.sonar.server.platform.db.migration.step.MigrationStepExecutionException: Execution of migration step #1907 'Populate table live_measures' failed
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:79)
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:67)
at java.util.Iterator.forEachRemaining(Iterator.java:116)
at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.util.stream.ReferencePipeline$Head.forEachOrdered(ReferencePipeline.java:590)
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:52)
at org.sonar.server.platform.db.migration.engine.MigrationEngineImpl.execute(MigrationEngineImpl.java:50)
at org.sonar.server.platform.db.migration.DatabaseMigrationImpl.doUpgradeDb(DatabaseMigrationImpl.java:105)
at org.sonar.server.platform.db.migration.DatabaseMigrationImpl.doDatabaseMigration(DatabaseMigrationImpl.java:80)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: Error during processing of row: [uuid=eea5cd4b-3c1c-4001-bf83-85c1062a1b7c,project_uuid=3dabb938-1a4a-4c82-b0a7-0b20cc419be9,metric_id=10019,value=1,text_value=null,variation_value_1=0,measure_data=null]
at org.sonar.server.platform.db.migration.step.SelectImpl.newExceptionWithRowDetails(SelectImpl.java:89)
at org.sonar.server.platform.db.migration.step.SelectImpl.scroll(SelectImpl.java:81)
at org.sonar.server.platform.db.migration.step.MassUpdate.execute(MassUpdate.java:91)
at org.sonar.server.platform.db.migration.version.v70.PopulateLiveMeasures.execute(PopulateLiveMeasures.java:57)
at org.sonar.server.platform.db.migration.step.DataChange.execute(DataChange.java:44)
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:75)
... 11 common frames omitted
Caused by: java.sql.BatchUpdateException: ORA-00001: unique constraint (SONARQUBE_IDM.LIVE_MEASURES_COMPONENT) violated
at oracle.jdbc.driver.OraclePreparedStatement.executeLargeBatch(OraclePreparedStatement.java:10032)
at oracle.jdbc.driver.T4CPreparedStatement.executeLargeBatch(T4CPreparedStatement.java:1364)
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:9839)
at oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:234)
at org.apache.commons.dbcp.DelegatingStatement.executeBatch(DelegatingStatement.java:297)
at org.apache.commons.dbcp.DelegatingStatement.executeBatch(DelegatingStatement.java:297)
at org.sonar.server.platform.db.migration.step.UpsertImpl.addBatch(UpsertImpl.java:42)
at org.sonar.server.platform.db.migration.step.MassUpdate.callSingleHandler(MassUpdate.java:118)
at org.sonar.server.platform.db.migration.step.MassUpdate.lambda$execute$0(MassUpdate.java:91)
at org.sonar.server.platform.db.migration.step.SelectImpl.scroll(SelectImpl.java:78)
... 15 common frames omitted
We had the same issue:
Execution of migration step #1907 'Populate table live_measures' failed; [...] ERROR: duplicate key value violates unique constraint "live_measures_component"
I checked the entries in our DB that were causing the issue with this query (we use PostgreSQL, so check whether the syntax is still valid for Oracle):
SELECT p.uuid, pm.metric_id, COUNT(1)
FROM project_measures pm
INNER JOIN projects p ON p.uuid = pm.component_uuid
INNER JOIN snapshots s ON s.uuid = pm.analysis_uuid
WHERE s.islast = TRUE AND pm.person_id IS NULL
GROUP BY p.uuid, pm.metric_id
HAVING COUNT(1) > 1;
There were more than 3,500 (!) entries with the same uuid and metric_id, so there was no chance to adjust individual table entries manually.
As we did not have enough time to analyze this further and wanted to get past it, we decided to drop the index "live_measures_component" on the table live_measures and recreate it without the UNIQUE key.
The following statements should work for you as well (with large databases, the duration of these statements should be taken into consideration...):
DROP INDEX "live_measures_component";
CREATE INDEX live_measures_component ON live_measures (component_uuid,metric_id);
This workaround allowed us to finish the database migration. I don't know whether it has side effects (maybe somebody from SonarQube can tell), but with more than 3,500 "problematic" entries in the DB it was our only possibility at the moment.
Hope this helps.
(No rep, can't comment.) The previous answer by guenther-s initially worked, but later caused our analysis to fail with SonarQube 9.7:
org.postgresql.util.PSQLException: ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification
when inserting into the live_measures table. We fixed ours by dropping the index, removing the duplicates, and re-adding a unique index after that:
DROP INDEX "live_measures_component";
DELETE FROM live_measures a
USING live_measures b
WHERE a.updated_at < b.updated_at
  AND a.component_uuid = b.component_uuid
  AND a.metric_uuid = b.metric_uuid;
CREATE UNIQUE INDEX live_measures_component ON live_measures (component_uuid,metric_uuid);
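Before recreating the unique index, a sanity-check query (an assumed helper, not part of the original answer) confirms that no duplicate (component_uuid, metric_uuid) pairs remain:

```sql
-- Should return zero rows once the duplicates are gone (PostgreSQL syntax)
SELECT component_uuid, metric_uuid, COUNT(*)
FROM live_measures
GROUP BY component_uuid, metric_uuid
HAVING COUNT(*) > 1;
```

If this still returns rows, the CREATE UNIQUE INDEX statement will fail, so rerun the deduplication first.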
I am getting the following exception when I try to list the Sqoop jobs.
I'm not able to create Sqoop jobs because of this exception:
root#ubuntu:/usr/lib/sqoop/conf# sqoop job --list
16/04/11 01:51:44 ERROR tool.JobTool: I/O error performing job operation:
java.io.IOException: Exception creating SQL connection
at com.cloudera.sqoop.metastore.hsqldb.HsqldbJobStorage.init(HsqldbJobStorage.java:220)
at com.cloudera.sqoop.metastore.hsqldb.AutoHsqldbStorage.open(AutoHsqldbStorage.java:113)
at com.cloudera.sqoop.tool.JobTool.run(JobTool.java:279)
at com.cloudera.sqoop.Sqoop.run(Sqoop.java:146)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at com.cloudera.sqoop.Sqoop.runSqoop(Sqoop.java:182)
at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:221)
at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:230)
at com.cloudera.sqoop.Sqoop.main(Sqoop.java:239)
Caused by: java.sql.SQLException: General error: java.lang.ClassFormatError: Truncated class file
at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
at org.hsqldb.jdbc.jdbcConnection.<init>(Unknown Source)
at org.hsqldb.jdbcDriver.getConnection(Unknown Source)
at org.hsqldb.jdbcDriver.connect(Unknown Source)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:185)
at com.cloudera.sqoop.metastore.hsqldb.HsqldbJobStorage.init(HsqldbJobStorage.java:180)
... 8 more
Sqoop Version: 1.3.0-cdh3u5
Please help
Commands used:
sqoop job --list
sqoop job --create sqoopjob21 -- import --connect jdbc:mysql://localhost/mysql1 --table emp --target-dir /importjob21 ;
This may be because Sqoop cannot find the HSQLDB metastore it uses to store job information. Check whether you have a "metastore.db.script" file in the Sqoop installation directory; if not, create it.
Create a file named "metastore.db.script" and put in the following lines:
CREATE SCHEMA PUBLIC AUTHORIZATION DBA
CREATE MEMORY TABLE SQOOP_ROOT(VERSION INTEGER,PROPNAME VARCHAR(128) NOT NULL,PROPVAL VARCHAR(256),CONSTRAINT SQOOP_ROOT_UNQ UNIQUE(VERSION,PROPNAME))
CREATE MEMORY TABLE SQOOP_SESSIONS(JOB_NAME VARCHAR(64) NOT NULL,PROPNAME VARCHAR(128) NOT NULL,PROPVAL VARCHAR(1024),PROPCLASS VARCHAR(32) NOT NULL,CONSTRAINT SQOOP_SESSIONS_UNQ UNIQUE(JOB_NAME,PROPNAME,PROPCLASS))
CREATE USER SA PASSWORD ""
GRANT DBA TO SA
SET WRITE_DELAY 10
SET SCHEMA PUBLIC
INSERT INTO SQOOP_ROOT VALUES(NULL,'sqoop.hsqldb.job.storage.version','0')
INSERT INTO SQOOP_ROOT VALUES(0,'sqoop.hsqldb.job.info.table','SQOOP_SESSIONS')
Now create a file named "metastore.db.properties" and put in these lines:
#HSQL Database Engine 1.8.0.10
#Fri Aug 04 14:07:10 IST 2017
hsqldb.script_format=0
runtime.gc_interval=0
sql.enforce_strict_size=false
hsqldb.cache_size_scale=8
readonly=false
hsqldb.nio_data_file=true
hsqldb.cache_scale=14
version=1.8.0
hsqldb.default_table_type=memory
hsqldb.cache_file_scale=1
hsqldb.log_size=200
modified=no
hsqldb.cache_version=1.7.0
hsqldb.original_version=1.8.0
hsqldb.compatible_version=1.8.0
Now create a directory named ".sqoop" (if it does not already exist) and put these two files there. Then run your job.
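The steps above can be sketched as a shell snippet. This is a sketch under the assumption that the metastore files belong in $HOME/.sqoop, the default location of Sqoop's client-side HSQLDB metastore; adjust the path if your sqoop.metastore.client.autoconnect.url points elsewhere.

```shell
#!/bin/sh
# Sketch: recreate Sqoop's default client metastore files under $HOME/.sqoop
# (assumption: the default ~/.sqoop metastore location is in use)
SQOOP_META_DIR="$HOME/.sqoop"
mkdir -p "$SQOOP_META_DIR"

# HSQLDB script file holding the metastore schema and seed rows
cat > "$SQOOP_META_DIR/metastore.db.script" <<'EOF'
CREATE SCHEMA PUBLIC AUTHORIZATION DBA
CREATE MEMORY TABLE SQOOP_ROOT(VERSION INTEGER,PROPNAME VARCHAR(128) NOT NULL,PROPVAL VARCHAR(256),CONSTRAINT SQOOP_ROOT_UNQ UNIQUE(VERSION,PROPNAME))
CREATE MEMORY TABLE SQOOP_SESSIONS(JOB_NAME VARCHAR(64) NOT NULL,PROPNAME VARCHAR(128) NOT NULL,PROPVAL VARCHAR(1024),PROPCLASS VARCHAR(32) NOT NULL,CONSTRAINT SQOOP_SESSIONS_UNQ UNIQUE(JOB_NAME,PROPNAME,PROPCLASS))
CREATE USER SA PASSWORD ""
GRANT DBA TO SA
SET WRITE_DELAY 10
SET SCHEMA PUBLIC
INSERT INTO SQOOP_ROOT VALUES(NULL,'sqoop.hsqldb.job.storage.version','0')
INSERT INTO SQOOP_ROOT VALUES(0,'sqoop.hsqldb.job.info.table','SQOOP_SESSIONS')
EOF

# HSQLDB properties file describing the database engine settings
cat > "$SQOOP_META_DIR/metastore.db.properties" <<'EOF'
#HSQL Database Engine 1.8.0.10
hsqldb.script_format=0
runtime.gc_interval=0
sql.enforce_strict_size=false
hsqldb.cache_size_scale=8
readonly=false
hsqldb.nio_data_file=true
hsqldb.cache_scale=14
version=1.8.0
hsqldb.default_table_type=memory
hsqldb.cache_file_scale=1
hsqldb.log_size=200
modified=no
hsqldb.cache_version=1.7.0
hsqldb.original_version=1.8.0
hsqldb.compatible_version=1.8.0
EOF

echo "metastore files written to $SQOOP_META_DIR"
```

After this, `sqoop job --list` should be able to open the metastore again.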
I am using Hadoop 2.6.0 secured with Kerberos. I installed Hive Server2 1.1.0 with a Derby database as the connection URL, and enabled security and authorization. When enabling the transaction configuration, I get the below exception and cannot execute any queries:
Exception
Error: Error while compiling statement: FAILED: LockException [Error 10280]: Error communicating with the metastore (state=42000,code=10280)
Logs
[Error 10280]: Error communicating with the metastore
org.apache.hadoop.hive.ql.lockmgr.LockException: Error communicating with the metastore
at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.getValidTxns(DbTxnManager.java:300)
at org.apache.hadoop.hive.ql.Driver.recordValidTxns(Driver.java:927)
Caused by: MetaException(message:Unable to select from transaction database, java.sql.SQLSyntaxErrorException: Table/View 'TXNS' does not exist.
So I added the below property in the hive-site.xml file, as mentioned in a blog here:
Configuration
<property>
<name>hive.in.test</name>
<value>true</value>
</property>
If I set the above property, I get the below exception, where I am stuck and unable to solve it. I cannot run any query, even use mydb;
Exception
Error: Error while compiling statement: FAILED: NullPointerException null (state=42000,code=40000)
Logs
Error executing statement:
org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: NullPointerException null
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:315)
at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:103)
Caused by: java.lang.NullPointerException
at org.apache.hadoop.hive.metastore.txn.TxnHandler.checkQFileTestHack(TxnHandler.java:1146)
at org.apache.hadoop.hive.metastore.txn.TxnHandler.<init>(TxnHandler.java:117)
I need a solution to get ACID transactions working in Hive Server2. I found two related questions, but they did not solve my issue:
hive 0.14 update and delete queries configuration error
Hive Transactions are crashing
Upgrade your Hive MySQL metastore DB with hive-txn-schema-0.14.0.mysql.sql, as follows:
mysql> SOURCE /usr/local/hadoop/hive/scripts/metastore/upgrade/mysql/hive-txn-schema-0.14.0.mysql.sql;
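After sourcing the script, a quick verification (an assumed check; the table names come from the 0.14 transaction schema) confirms the transaction tables now exist, which is what the "Table/View 'TXNS' does not exist" error was complaining about:

```sql
-- Run in the Hive metastore database after sourcing the txn schema
SHOW TABLES LIKE 'TXNS';
-- Should return a single row holding the next transaction id
SELECT * FROM NEXT_TXN_ID;
```

Note that the question uses a Derby metastore; in that case the matching Derby transaction-schema script under scripts/metastore/upgrade/derby/ should be sourced instead of the MySQL one.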