Error querying database. Cause: java.lang.NullPointerException: QProfile is missing - sonarqube

I'm trying to restart SonarQube and ran into a "QProfile is missing" error, but I'm not sure what to do to address it.
2015.07.06 12:50:31 ERROR [o.s.s.p.PlatformServletContextListener] Fail to start server
org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: java.lang.NullPointerException: QProfile is missing
### The error may exist in org.sonar.core.qualityprofile.db.ActiveRuleMapper
### The error may involve org.sonar.core.qualityprofile.db.ActiveRuleMapper.selectAllKeysAfterTimestamp-Inline
### The error occurred while setting parameters
### SQL: SELECT r.plugin_rule_key as "rulefield", r.plugin_name as "repository", qp.kee as "profileKey" FROM active_rules a LEFT JOIN rules_profiles qp ON qp.id=a.profile_id LEFT JOIN rules r ON r.id = a.rule_id WHERE a.updated_at IS NULL or a.updated_at >= ?
### Cause: java.lang.NullPointerException: QProfile is missing
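In case it helps with diagnosis: the failing SQL LEFT JOINs active_rules to rules_profiles, and the NullPointerException suggests an active rule whose profile_id no longer resolves to any quality profile. A hedged, read-only diagnostic sketch (not an official SonarQube procedure) that reuses the table names from the query above:

-- Hypothetical check: list active_rules rows whose profile_id no longer
-- matches a rules_profiles row, which would explain "QProfile is missing".
SELECT a.id, a.profile_id
FROM active_rules a
LEFT JOIN rules_profiles qp ON qp.id = a.profile_id
WHERE qp.id IS NULL;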

Related

Hive SQL error: Failed rule 'identifier' in the Select target

I wrote a Hive SQL query:
SELECT
dt,
COUNT(CASE WHEN search_word like '%A%' THEN id END) AS a,
COUNT(CASE WHEN search_word like '%B%' THEN id END) AS b,
FROM database
GROUP BY dt
However, Hive returns an error:
Error while compiling statement: Failed
ParseException line 3:7 Failed to recognise predicate 'AS'. Failed rule: 'identifier' in the select target.
I searched for this error and my assumption is that it might come from AS being a reserved word, but I still do not understand how to fix it.
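For what it's worth, the parse failure is most likely not AS itself but the trailing comma after the second COUNT(...) alias, which leaves Hive expecting another select-list identifier right before FROM. A corrected sketch of the same query (and if the real table is literally named database, it may also need backticks, since DATABASE is a Hive keyword):

SELECT
dt,
COUNT(CASE WHEN search_word like '%A%' THEN id END) AS a,
COUNT(CASE WHEN search_word like '%B%' THEN id END) AS b
FROM database
GROUP BY dt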

Group By Statement within HQL

I'm doing a simple pull from an HQL database. It gives me an error only when I attempt to incorporate the GROUP BY statement:
select
element1
,element2
,max(element3) as maxelement3
from HQLdb.HQLtable
group by element1,element2
Error message that I receive:
SELECT Failed. 35: (35) Error from server: error code: '1' error
message: 'Error while processing statement: FAILED: Execution Error,
return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
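The Tez return code on its own says very little; the real cause is usually buried in the YARN/Tez application logs. One hedged troubleshooting sketch, not a fix, is to re-run the same statement on the MapReduce engine, which often surfaces a more specific error message:

-- Hypothetical troubleshooting step: switch execution engines to get a
-- clearer error, then check the YARN application logs for the root cause.
SET hive.execution.engine=mr;
select element1, element2, max(element3) as maxelement3
from HQLdb.HQLtable
group by element1, element2;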

Kafka Connect with CockroachDB

I am trying to use CockroachDB (v2.0.6) as a sink for one of my Kafka topics.
I wasn't able to find a Kafka connector specifically for CockroachDB, so I decided to use the JDBC sink connector from Confluent, since CockroachDB supports the PostgreSQL syntax.
The connection string that I use on Kafka Connect is the following:
"connection.url": "jdbc:postgresql://roach1:26257/mydb?sslmode=disable"
which basically is the only thing I changed on an existing working Postgres sink connector.
Unfortunately I was unable to make it work, since the connector fails with the following error:
Caused by: org.apache.kafka.connect.errors.ConnectException: java.sql.SQLException: org.postgresql.util.PSQLException: ERROR: syntax error at or near "."
Detail: source SQL:
SELECT NULL AS TABLE_CAT, n.nspname AS TABLE_SCHEM, ct.relname AS TABLE_NAME, a.attname AS COLUMN_NAME, (i.keys).n AS KEY_SEQ, ci.relname AS PK_NAME FROM pg_catalog.pg_class ct JOIN pg_catalog.pg_attribute a ON (ct.oid = a.attrelid) JOIN pg_catalog.pg_namespace n ON (ct.relnamespace = n.oid) JOIN (SELECT i.indexrelid, i.indrelid, i.indisprimary, information_schema._pg_expandarray(i.indkey) AS keys FROM pg_catalog.pg_index i) i ON (a.attnum = (i.keys).x AND a.attrelid = i.indrelid) JOIN pg_catalog.pg_class ci ON (ci.oid = i.indexrelid) WHERE true AND ct.relname = 'my_topic' AND i.indisprimary ORDER BY table_name, pk_name, key_seq
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:88)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:564)
... 10 more
Caused by: java.sql.SQLException: org.postgresql.util.PSQLException: ERROR: syntax error at or near "."
Detail: source SQL:
SELECT NULL AS TABLE_CAT, n.nspname AS TABLE_SCHEM, ct.relname AS TABLE_NAME, a.attname AS COLUMN_NAME, (i.keys).n AS KEY_SEQ, ci.relname AS PK_NAME FROM pg_catalog.pg_class ct JOIN pg_catalog.pg_attribute a ON (ct.oid = a.attrelid) JOIN pg_catalog.pg_namespace n ON (ct.relnamespace = n.oid) JOIN (SELECT i.indexrelid, i.indrelid, i.indisprimary, information_schema._pg_expandarray(i.indkey) AS keys FROM pg_catalog.pg_index i) i ON (a.attnum = (i.keys).x AND a.attrelid = i.indrelid) JOIN pg_catalog.pg_class ci ON (ci.oid = i.indexrelid) WHERE true AND ct.relname = 'collect_flow_tracking' AND i.indisprimary ORDER BY table_name, pk_name, key_seq
So my question is, has anyone used Kafka Connect with CockroachDB successfully?
Also, does anyone have any pointers on this error (what causes it) and how to circumvent it and make this work?
CockroachDB PM here. It looks like the problem is an unsupported database introspection query performed by the Kafka Connect Postgres connector. The good news is that this particular query does appear to be supported by CockroachDB 2.1. Can you try again using the latest CockroachDB beta?
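If it helps anyone verify this on their own cluster, the part of the driver's introspection query that uses composite-field syntax can be run standalone; a hedged sketch built from the fragment in the stack trace, which should reproduce the same syntax error on versions that lack support:

-- Hypothetical standalone check: versions without support for
-- information_schema._pg_expandarray and the (i.keys).n syntax should
-- fail here with the same "syntax error at or near" message.
SELECT (i.keys).x AS key_position, (i.keys).n AS key_seq
FROM (SELECT information_schema._pg_expandarray(i.indkey) AS keys
      FROM pg_catalog.pg_index i) i
LIMIT 1;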

NamedParameterJdbcTemplate batchUpdate not working for merge

I am running the below merge query using NamedParameterJdbcTemplate batchUpdate.
The batchUpdate(String sql, SqlParameterSource[] batchArgs) call works when batchArgs contains one item but fails with multiple items. Below is the query; I have renamed the table column names.
NamedParameterJdbcTemplate.batchUpdate works if run for each array item separately, or if run as NamedParameterJdbcTemplate.update().
SQL query:
MERGE INTO tableName C1 USING (VALUES (:f1)) AS C2(Field1)
ON (C1.CMP = C2.CO_C)
WHEN MATCHED THEN
UPDATE SET Field2 = :date1, Field3 = :time1, Field4 = :userId
WHEN NOT MATCHED THEN
INSERT (Field5, Field1, Field4, Field2, Field3) VALUES (:offCC, :f1, :userId, :date1, :time1)
[err] SQL exception:
[err] Message: [jcc][t4][102][10040][4.22.29] Batch failure. The batch was submitted, but at least one exception occurred on an individual member of the batch. Use getNextException() to retrieve the exceptions for specific batched elements. ERRORCODE=-4229, SQLSTATE=null
[err] SQLSTATE: null
[err] Error code: -4229
[err] SQL exception:
[err] Message: VARIABLE IS NOT DEFINED OR NOT USABLE.
SQLCODE=-312, SQLSTATE=42618, DRIVER=4.22.29
[err] SQLSTATE: 42618
[err] Error code: -312
[err] SQL exception:
[err] Message: Error for batch element #1: VARIABLE IS NOT DEFINED OR NOT USABLE
[err] SQLSTATE: 42618
[err] Error code: -312
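A hedged guess, given that SQLCODE -312 means a variable or parameter marker could not be resolved: DB2 frequently cannot infer a type for markers inside MERGE ... USING (VALUES ...) when the statement is batched, and the usual workaround is an explicit CAST on the marker. A sketch with VARCHAR(10) as an assumed type for Field1:

-- Hypothetical workaround, with VARCHAR(10) as an assumed column type:
-- the CAST gives the otherwise untyped marker a type DB2 can resolve
-- for every element of the batch; the rest of the statement is unchanged.
MERGE INTO tableName C1 USING (VALUES (CAST(:f1 AS VARCHAR(10)))) AS C2(Field1)
ON (C1.CMP = C2.CO_C)
WHEN MATCHED THEN
UPDATE SET Field2 = :date1, Field3 = :time1, Field4 = :userId
WHEN NOT MATCHED THEN
INSERT (Field5, Field1, Field4, Field2, Field3) VALUES (:offCC, :f1, :userId, :date1, :time1)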

Strange error when using a Hive UDF through a JDBC client

Hi all. I ran into a strange error when using a Hive UDF through a JDBC client.
I have a UDF called reformat_date that converts a string into timestamp format. I first execute ADD JAR and CREATE TEMPORARY FUNCTION, and both work fine.
The SQL can also be explained and executed in Hive CLI mode, but when I use the JDBC client I get errors:
Query returned non-zero code: 10, cause:
FAILED: Error in semantic analysis: Line 1:283 Wrong arguments ''20121201000000'':
org.apache.hadoop.hive.ql.metadata.HiveException:
Unable to execute method public org.apache.hadoop.io.Text com.aa.datawarehouse.hive.udf.ReformatDate.evaluate(org.apache.hadoop.io.Text) on object com.aa.datawarehouse.hive.udf.ReformatDate@4557e3e8 of class com.aa.datawarehouse.hive.udf.ReformatDate with arguments {20121201000000:org.apache.hadoop.io.Text} of size 1:
at com.aa.statistic.dal.impl.TjLoginDalImpl.selectAwakenedUserCount(TjLoginDalImpl.java:258)
at com.aa.statistic.backtask.service.impl.UserBehaviorAnalysisServiceImpl.recordAwakenedUser(UserBehaviorAnalysisServiceImpl.java:326)
at com.aa.statistic.backtask.controller.BackstatisticController$21.execute(BackstatisticController.java:773)
at com.aa.statistic.backtask.controller.BackstatisticController$DailyExecutor.execute(BackstatisticController.java:823)
My SQL is:
select count(distinct a.user_id) as cnt
from (
    select user_id, user_kind, login_date, login_time
    from tj_login_hive
    where p_month = '2012_12' and login_date = '20121201' and user_kind = '0'
) a
join (
    select user_id
    from tj_login_hive
    where p_month <= '2012_12'
      and datediff(to_date(reformat_date(concat('20121201', '000000'))), to_date(reformat_date(concat(login_date, '000000')))) >= 90
) b
on a.user_id = b.user_id
Thanks.
I think your UDF threw an exception.
If reformat_date is a function you wrote, you should check your logic.
If not, you should check the UDF's specification.
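One way to narrow it down, sketched below under the assumption that the JDBC session is fresh: temporary functions are session-scoped in Hive, so ADD JAR and CREATE TEMPORARY FUNCTION must run on the same JDBC connection as the query, and calling the UDF on the failing literal in isolation shows whether the exception really comes from the UDF logic (the jar path is a placeholder):

-- Hypothetical isolation test; all three statements must run over the
-- same JDBC connection, because temporary functions are session-scoped.
ADD JAR /path/to/reformat_date_udf.jar;
CREATE TEMPORARY FUNCTION reformat_date AS 'com.aa.datawarehouse.hive.udf.ReformatDate';
SELECT reformat_date(concat('20121201', '000000')) FROM tj_login_hive LIMIT 1;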
