SonarQube 5.6: cannot update issue

I am trying to update an issue in the UI (assign / set severity / open), but nothing happens.
When I inspect the network exchange I see a 404:
{"errors":[{"msg":"Issue with key '76b53a17-fa8f-4d04-b999-1fd5e401fee0' does not exist"}]}
However, I can find the issue in my database (MySQL):
mysql> select kee from issues where kee='76b53a17-fa8f-4d04-b999-1fd5e401fee0';
+--------------------------------------+
| kee |
+--------------------------------------+
| 76b53a17-fa8f-4d04-b999-1fd5e401fee0 |
+--------------------------------------+
1 row in set (0.00 sec)
We tried to find the query that SonarQube executes. We only found it in the head version of the SonarQube sources (IssueFinder and the iBatis config), and it works:
select i.id,
i.kee as kee,
i.rule_id as ruleId,
i.severity as severity,
i.manual_severity as manualSeverity,
i.message as message,
i.line as line,
i.locations as locations,
i.gap as gap,
i.effort as effort,
i.status as status,
i.resolution as resolution,
i.checksum as checksum,
i.assignee as assignee,
i.author_login as authorLogin,
i.tags as tagsString,
i.issue_attributes as issueAttributes,
i.issue_creation_date as issueCreationTime,
i.issue_update_date as issueUpdateTime,
i.issue_close_date as issueCloseTime,
i.created_at as createdAt,
i.updated_at as updatedAt,
r.plugin_rule_key as ruleKey,
r.plugin_name as ruleRepo,
r.language as language,
p.kee as componentKey,
i.component_uuid as componentUuid,
p.module_uuid as moduleUuid,
p.module_uuid_path as moduleUuidPath,
p.path as filePath,
root.kee as projectKey,
i.project_uuid as projectUuid,
i.issue_type as type
from issues i
inner join rules r on r.id=i.rule_id
inner join projects p on p.uuid=i.component_uuid
inner join projects root on root.uuid=i.project_uuid
where i.kee='76b53a17-fa8f-4d04-b999-1fd5e401fee0';
It returns one row.
What can I do? Is it a bug?

The ES folder is probably corrupted. SonarQube serves issue searches from an embedded Elasticsearch index, so an issue can exist in the database yet be invisible to the web services if that index is out of sync. Here are the steps to clean it up:
Stop the SonarQube server
Remove the {SONARQUBE_INSTALLATION}/data/es folder
Restart the server
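For reference, the removal step can be sketched as a small script. The installation root below is a hypothetical example; adjust it to your setup, and make sure the server is stopped first:

```python
import shutil
from pathlib import Path

# Hypothetical installation root; adjust to your own setup.
SONARQUBE_HOME = Path("/opt/sonarqube")

# Remove the Elasticsearch data folder; SonarQube rebuilds the index
# from the database on the next server start.
es_dir = SONARQUBE_HOME / "data" / "es"
if es_dir.exists():
    shutil.rmtree(es_dir)
```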


Apache Kylin and Sqoop - Is there a way to edit the Sqoop generated SQL statement?

I am working with Apache Kylin and using Sqoop to connect to my PostgreSQL database. I have a cube based on a fact table that references the same dimension table twice. The problem arises when I try to build the cube: I get the following error on the first step of the job (#1 Step Name: Sqoop To Flat Hive Table):
ERROR manager.SqlManager: Error executing statement: org.postgresql.util.PSQLException: ERROR: table name "d_date" specified more than once
The problem is that Sqoop generates SQL that references the table d_date twice and gives it the same alias both times, so the SQL statement fails... Can I configure it in any way to fix this issue?
Edit: If the answer is no, that is also helpful, I just really need to know whether there is anything I can do to fix this...
This is the generated SQL (the problem is the two INNER JOIN public.d_date d_date clauses):
SELECT f_exam.course_id as F_EXAM_COURSE_ID ,f_exam.academic_year_id as F_EXAM_ACADEMIC_YEAR_ID ,f_exam.semester_id as F_EXAM_SEMESTER_ID ,f_exam.exam_id as F_EXAM_EXAM_ID ,f_exam.exam_app_user_created_id as F_EXAM_EXAM_APP_USER_CREATED_ID ,f_exam.exam_available_from_date_id ,f_exam.exam_available_from_time_id as F_EXAM_EXAM_AVAILABLE_FROM_TIME_ID ,f_exam.exam_available_to_date_id as F_EXAM_EXAM_AVAILABLE_TO_DATE_ID ,f_exam.exam_available_to_time_id as F_EXAM_EXAM_AVAILABLE_TO_TIME_ID ,f_exam.exam_ordinal_id as F_EXAM_EXAM_ORDINAL_ID ,d_time_day.time_day_id as D_AVAILABLE_FROM_TIME_TIME_DAY_ID ,d_time_day.hour_minutes_seconds as D_AVAILABLE_FROM_TIME_HOUR_MINUTES_SECONDS ,d_time_day.the_seconds as D_AVAILABLE_FROM_TIME_THE_SECONDS ,d_time_day.the_minutes as D_AVAILABLE_FROM_TIME_THE_MINUTES ,d_time_day.the_hours as D_AVAILABLE_FROM_TIME_THE_HOURS ,d_time_day.period_of_day as D_AVAILABLE_FROM_TIME_PERIOD_OF_DAY ,d_time_day.time_day_id as D_AVAILABLE_TO_TIME_TIME_DAY_ID ,d_time_day.hour_minutes_seconds as D_AVAILABLE_TO_TIME_HOUR_MINUTES_SECONDS ,d_time_day.the_seconds as D_AVAILABLE_TO_TIME_THE_SECONDS ,d_time_day.the_minutes as D_AVAILABLE_TO_TIME_THE_MINUTES ,d_time_day.the_hours as D_AVAILABLE_TO_TIME_THE_HOURS ,d_time_day.period_of_day as D_AVAILABLE_TO_TIME_PERIOD_OF_DAY ,f_exam.number_of_questions as F_EXAM_NUMBER_OF_QUESTIONS ,f_exam.duration_in_seconds as F_EXAM_DURATION_IN_SECONDS ,f_exam.number_of_students_participated as F_EXAM_NUMBER_OF_STUDENTS_PARTICIPATED ,f_exam.is_forward_only_01 as F_EXAM_IS_FORWARD_ONLY_01 ,f_exam.max_score_possible as F_EXAM_MAX_SCORE_POSSIBLE ,f_exam.max_score as F_EXAM_MAX_SCORE ,f_exam.min_score as F_EXAM_MIN_SCORE ,f_exam.pass_percentage as F_EXAM_PASS_PERCENTAGE ,f_exam.max_score_percentage as F_EXAM_MAX_SCORE_PERCENTAGE ,f_exam.min_score_percentage as F_EXAM_MIN_SCORE_PERCENTAGE ,f_exam.avg_score as F_EXAM_AVG_SCORE ,f_exam.median as F_EXAM_MEDIAN ,f_exam.first_quartile as F_EXAM_FIRST_QUARTILE ,f_exam.third_quartile as 
F_EXAM_THIRD_QUARTILE ,f_exam.interquartile_range as F_EXAM_INTERQUARTILE_RANGE ,f_exam.minimum_without_outliers as F_EXAM_MINIMUM_WITHOUT_OUTLIERS ,f_exam.maximum_without_outliers as F_EXAM_MAXIMUM_WITHOUT_OUTLIERS FROM public.f_exam f_exam INNER JOIN public.d_course d_course ON f_exam.course_id = d_course.course_id INNER JOIN public.d_academic_year d_academic_year ON f_exam.academic_year_id = d_academic_year.academic_year_id INNER JOIN public.d_semester d_semester ON f_exam.semester_id = d_semester.semester_id INNER JOIN public.d_exam d_exam ON f_exam.exam_id = d_exam.exam_id INNER JOIN public.d_app_user d_app_user ON f_exam.exam_app_user_created_id = d_app_user.app_user_id INNER JOIN public.d_date d_date ON f_exam.exam_available_from_date_id = d_date.date_id INNER JOIN public.d_time_day d_time_day ON f_exam.exam_available_from_time_id = d_time_day.time_day_id INNER JOIN public.d_date d_date ON f_exam.exam_available_to_date_id = d_date.date_id INNER JOIN public.d_time_day d_time_day ON f_exam.exam_available_to_time_id = d_time_day.time_day_id INNER JOIN public.d_ordinal d_ordinal ON f_exam.exam_ordinal_id = d_ordinal.ordinal_id WHERE 1=1 AND (f_exam.exam_available_from_date_id >= 20120101 AND f_exam.exam_available_from_date_id < 20170101) AND (1 = 0)
The long SQL statement is generated by Kylin and submitted to Sqoop for execution, so what you really need to fix is the duplicated alias of the two public.d_date joins in the Kylin model definition.
In the Kylin model designer, your fact table f_exam must be joined to public.d_date twice. Set the two joins' aliases to different names, save the model, and build again. This changes the generated SQL and lets the Sqoop step pass.
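The fix reflects a general SQL rule: a table may be joined several times only if each occurrence gets a distinct alias (PostgreSQL rejects a repeated alias with exactly the "table name specified more than once" error above). A minimal sketch of the pattern in Python with sqlite3, where the table and column names are simplified stand-ins for the Kylin model:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE d_date (date_id INTEGER PRIMARY KEY);
    CREATE TABLE f_exam (exam_id INTEGER, from_date_id INTEGER, to_date_id INTEGER);
    INSERT INTO d_date VALUES (20160101), (20160201);
    INSERT INTO f_exam VALUES (1, 20160101, 20160201);
""")

# The same dimension table is joined twice, once per role, and each
# join uses its own alias (d_from / d_to), so the statement is valid.
rows = con.execute("""
    SELECT f.exam_id, d_from.date_id, d_to.date_id
    FROM f_exam f
    JOIN d_date d_from ON f.from_date_id = d_from.date_id
    JOIN d_date d_to   ON f.to_date_id   = d_to.date_id
""").fetchall()
print(rows)  # [(1, 20160101, 20160201)]
```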

Calendar agent - error: sql cached statement NSSQLiteStatement

After spending quite a lot of time trying to fix the spinning-beach-ball lag in my Calendar app every time I create or edit an event, I found this error in the Console application:
"error: sql cached statement NSSQLiteStatement <0x7f8eef4c0fd0> on entity 'Group' with sql text 'SELECT t0.Z_ENT, t0.Z_PK, t0.Z_OPT, t0.ZCOLORSTRING, t0.ZISENABLED, ....... ( t0.Z_PK IN (SELECT * FROM _Z_intarray0) AND t0.Z_ENT >= ? AND t0.Z_ENT <= ?) ' failed due to missing variable binding for (null) with expecting bindings (
"<NSSQLBindVariable: 0x7f8eef47bdd0>",
"<NSSQLBindVariable: 0x7f8eef47be70>"
) but actual substitution variables {
objects = "{<NSManagedObject: 0x7f8eef540140> (entity: ExchangePrincipal; id: 0x240092b <x-coredata://93547915-498F-4251-8E7E-23DD04782C04/ExchangePrincipal/p9> ; data: <fault>)}";
}
error: sql cached statement NSSQLiteStatement <0x7f8eef48daa0> on entity 'Attendee' with sql text 'SELECT 0, t0.Z_PK, t0.Z_OPT, t0.ZADDRESSSTRING, t0.ZCOMMONNAME, t0.ZEMAIL, t0.ZINCLUDEDINALLRESPONDED, t0.ZINVITERNAME, t0.ZISSELFINVITED, t0.ZLIKENESSDATASTRING, t0.ZOMITSYNCRECORD, t0.ZPROPOSALENDDATE, t0.ZPROPOSALSTARTDATE, t0.ZPROPOSALSTATUS, t0.ZROLE, t0.ZRSVP, t0.ZSCHEDULEAGENT, t0.ZSCHEDULEFORCESEND, t0.ZSCHEDULESTATUS, t0.ZSTATUS, t0.ZSTATUSMODIFIEDDATE, t0.ZTYPE, t0.ZEVENT, t0.ZMYATTENDEEFOREVENT FROM ZATTENDEE t0 WHERE t0.ZMYATTENDEEFOREVENT IN (SELECT * FROM _Z_intarray0) ' failed due to missing variable binding for (null) with expecting bindings (
) but actual substitution variables {
destinations = "{0x122c009eb <x-coredata://93547915-498F-4251-8E7E-23DD04782C04/Event/p1163>, 0x1230009eb <x-coredata://93547915-498F-4251-8E7E-23DD04782C04/Event/p1164>, 0x1234009eb <x-coredata://93547915-498F-4251-8E7E-23DD04782C04/Event/p1165>, 0x123c009eb <x-coredata://93547915-498F-4251-8E7E-23DD04782C04/Event/p1167>, 0x1258009eb <x-coredata://93547915-498F-4251-8E7E-23DD04782C04/Event/p1174>, 0x1264009eb <x-coredata://93547915-498F-4251-8E7E-23DD04782C04/Event/p1177>, 0x126c009eb <x-coredata://93547915-498F-4251-8E7E-23DD04782C04/Event/p1179>, 0x127c009eb <x-coredata://93547915-498F-4251-8E7E-23DD04782C04/Event/p1183>, 0x1280009eb <x-coredata://93547915-498F-4251-8E7E-23DD04782C04/Event/p1184>, 0x1284009eb <x-coredata://93547915-498F-4251-8E7E-23DD04782C04/Event/p1185>}";
}"
There are around 8-10 such errors each time I create a new event.
Can you please help me with this issue?
I have already reinstalled macOS Sierra a few times,
but it made no difference.
What fixed it for me was a fresh, clean install of macOS.
I no longer get the issue (and Calendar is a pleasure to use),
but if it reappears I will update this answer.

Apache Drill 1.2 and Oracle JDBC

Using Apache Drill v1.2 and Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit in embedded mode.
I'm curious if anyone has had any success connecting Apache Drill to an Oracle DB. I've updated the drill-override.conf with the following configurations (per documents):
drill.exec: {
  cluster-id: "drillbits1",
  zk.connect: "localhost:2181",
  drill.exec.sys.store.provider.local.path = "/mypath"
}
and placed the ojdbc6.jar in \apache-drill-1.2.0\jars\3rdparty. I can successfully create the storage plug-in:
{
  "type": "jdbc",
  "driver": "oracle.jdbc.driver.OracleDriver",
  "url": "jdbc:oracle:thin:@<IP>:<PORT>:<SID>",
  "username": "USERNAME",
  "password": "PASSWORD",
  "enabled": true
}
but when I issue a query such as:
select * from <storage_name>.<schema_name>.`dual`;
I get the following error:
Query Failed: An Error Occurred
org.apache.drill.common.exceptions.UserRemoteException: VALIDATION ERROR: From line 1, column 15 to line 1, column 20: Table '<storage_name>.<schema_name>.dual' not found [Error Id: 57a4153c-6378-4026-b90c-9bb727e131ae on <computer_name>:<PORT>].
I've tried querying other schemas/tables and get a similar result. I've also tried connecting to Teradata and get the same error. Has anyone run into similar issues or have any suggestions?
It's working with Drill 1.3 (released on 23-Dec-2015)
Plugin: name - oracle
{
  "type": "jdbc",
  "driver": "oracle.jdbc.driver.OracleDriver",
  "url": "jdbc:oracle:thin:user/password@192.xxx.xxx.xxx:1521:orcl",
  "enabled": true
}
Query:
select * from <plugin-name>.<user-name>.<table-name>;
Example:
select * from oracle.USER.SAMPLE;
Check Drill's documentation for more details.
Note: make sure you have added ojdbc7.12.1.0.2.jar (the version recommended in the docs) to apache-drill-1.3.0/jars/3rdparty.
It kind of works in Apache Drill 1.3.
The strange thing is that I can only query the tables for which synonyms have been created...
In the command line try:
use <storage_name>;
show tables;
This will give you a list of objects that you can query - dual is not on that list ;-).
I'm using apache-drill-1.9.0, and it seems that the schema name is interpreted case-sensitively and must therefore be in upper case.
For a table user1.my_tab (which Oracle creates in upper case by default),
this works in Drill (the plugin name is oracle):
SELECT * FROM oracle.USER1.my_tab;
But this triggers an error:
SELECT * FROM oracle.user1.my_tab;
SEVERE: org.apache.calcite.sql.validate.SqlValidatorException: Table 'oracle.user1.my_tab' not found
An alternative approach is to set the plugin name and the schema name with use (the owner must be in upper case as well):
0: jdbc:drill:zk=local> use oracle.USER1;
+-------+-------------------------------------------+
| ok | summary |
+-------+-------------------------------------------+
| true | Default schema changed to [oracle.USER1] |
+-------+-------------------------------------------+
1 row selected (0,169 seconds)
0: jdbc:drill:zk=local> select * from my_tab;
+------+
| X |
+------+
| 1.0 |
| 1.0 |
+------+
2 rows selected (0,151 seconds)

Test the existence of a Teradata table and create the table if non-existent

Our continuous integration server (Hudson) is having a strange issue when attempting to run a simple CREATE TABLE statement in Teradata.
This statement tests the existence of the max_call table:
unless $teradata_connection.table_exists? :arm_custom_db__max_call_attempt_parameters
  $teradata_connection.run('CREATE TABLE all_wkscratchpad_db.max_call_attempt_parameters AS (SELECT * FROM arm_custom_db.max_call_attempt_parameters ) WITH NO DATA')
end
The table_exists? method does the following:
def table_exists?(name)
  v ||= false # only retry once
  sch, table_name = schema_and_table(name)
  name = SQL::QualifiedIdentifier.new(sch, table_name) if sch
  from(name).first
  true
rescue DatabaseError => e
  if e.to_s =~ /Operation not allowed for reason code "7" on table/ && v == false
    # table probably needs reorg
    reorg(name)
    v = true
    retry
  end
  false
end
So, as per the from(name).first line, the test this method performs is just a simple SELECT statement, which in SQL looks like:
SELECT TOP 1 MAX(CAST(MAX_CALL_ATTEMPT_CNT AS BIGINT)) FROM ALL_WKSCRATCHPAD_DB.MAX_CALL_ATTEMPT_PARAMETERS
The above SQL statement executes perfectly fine within Teradata SQL Assistant, so it's not a SQL syntax issue. The generic ID that our testing suite (Rubymine) uses is also not the issue; that ID has SELECT access to arm_custom_db.
The exception I can see being thrown (in the build's console output on Hudson) is
Sequel::DatabaseError: Java::ComTeradataJdbcJdbc_4Util::JDBCException. Since this exception is a subclass of DatabaseError, the exception shouldn't be the problem either.
Also: we use unless statements like this every day for hundreds of different tables, and all except this one work correctly. Only this statement seems to be a problem.
The complete error message which appears in the build's console output on Hudson is as follows:
[2015-01-07T13:56:37.947000 #16702] ERROR -- : Java::ComTeradataJdbcJdbc_4Util::JDBCException: [Teradata Database] [TeraJDBC 13.10.00.17] [Error 3807] [SQLState 42S02] Object 'ALL_WKSCRATCHPAD_DB.MAX_CALL_ATTEMPT_PARAMETERS' does not exist.: SELECT TOP 1 MAX(CAST(MAX_CALL_ATTEMPT_CNT AS BIGINT)) FROM ALL_WKSCRATCHPAD_DB.MAX_CALL_ATTEMPT_PARAMETERS
Sequel::DatabaseError: Java::ComTeradataJdbcJdbc_4Util::JDBCException: [Teradata Database] [TeraJDBC 13.10.00.17] [Error 3807] [SQLState 42S02] Object 'ALL_WKSCRATCHPAD_DB.MAX_CALL_ATTEMPT_PARAMETERS' does not exist.
I don't understand why this specific bit of code is giving me issues...there does not appear to be anything special about this table or database, and all SQL code executes perfectly fine in Teradata when I am signed in with the same exact user ID that is being used to execute the code from Hudson.
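One way to make the test-then-create pattern easier to debug is to query the catalog directly instead of selecting from the table and rescuing the resulting error. Below is a minimal sketch of that idea in Python with sqlite3; in Teradata the equivalent catalog query would target a DBC system view (e.g. dbc.Tables), and the table name here is only illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")

def table_exists(con, name):
    # Ask the catalog directly; no exception handling needed.
    row = con.execute(
        "SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = ?",
        (name,),
    ).fetchone()
    return row is not None

# Create the table only when the existence check says it is missing.
if not table_exists(con, "max_call_attempt_parameters"):
    con.execute("CREATE TABLE max_call_attempt_parameters (cnt INTEGER)")

print(table_exists(con, "max_call_attempt_parameters"))  # True
```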

Can't upload images to Redmine anymore

For some strange reason I can't upload images to tickets in Redmine anymore. I can upload a txt file or zip files, but when I upload an image to a ticket it either says "Service Unavailable" or "Unprocessable". The weird thing is that it used to work. We updated to the latest Redmine (2.6.0.stable).
I looked at the production.log and this is the error (Can't verify CSRF token authenticity):
Started POST "/uploads.js?attachment_id=1&filename=test.png" for xx.xx.xxx.xxx at 2014-12-03 12:58:49 -0500
Processing by AttachmentsController#upload as JS
Parameters: {"attachment_id"=>"1", "filename"=>"test.png"}
WARNING: Can't verify CSRF token authenticity
Filter chain halted as :verify_authenticity_token rendered or redirected
Completed 422 Unprocessable Entity in 2.6ms (ActiveRecord: 0.3ms)
Here is my Redmine Information:
Default administrator account changed True
Attachments directory writable True
Plugin assets directory writable True
RMagick available (optional) Exclamation
ImageMagick convert available (optional) True
Environment:
Redmine version 2.6.0.stable
Ruby version 1.9.3-p547 (2014-05-14) [x86_64-linux]
Rails version 3.2.19
Environment production
Database adapter Mysql2
SCM:
Git 1.8.2.1
Filesystem
Redmine plugins:
redmine_agile 1.3.2
redmine_ckeditor 1.0.16
redmine_github_hook 2.1.0
redmine_my_page_queries 2.1.6
redmine_theme_changer 0.1.0
It turns out that this was a Varnish issue. We got around the problem by adding this Varnish rule:
if (req.http.host ~ "my\.domain\.com$") {
  return (pipe);
}
Here are some debugging steps we took to try to figure out the problem.
Temporarily added config.action_controller.allow_forgery_protection = false to application.rb. When we tried to upload an image, I got a popup: "login required for Server on Redmine API". This gave me a clue that it must be some kind of server issue.
Created additional_environment.rb and enabled config.log_level = :debug. This added more debug info to the log file.
Started POST "/uploads.js?attachment_id=1&filename=Screen%20Shot%202014-12-11%20at%2010.01.49%20AM.png" for xx.xx.xxx.xxx at 2014-12-11 11:07:41 -0500
Processing by AttachmentsController#upload as JS
Parameters: {"attachment_id"=>"1", "filename"=>"Screen Shot 2014-12-11 at 10.01.49 AM.png"}
(0.3ms) SELECT MAX(`settings`.`updated_on`) AS max_id FROM `settings`
Setting Load (0.3ms) SELECT `settings`.* FROM `settings` WHERE `settings`.`name` = 'rest_api_enabled' LIMIT 1
AnonymousUser Load (0.3ms) SELECT `users`.* FROM `users` WHERE `users`.`type` IN ('AnonymousUser') LIMIT 1
Current user: anonymous
Setting Load (0.3ms) SELECT `settings`.* FROM `settings` WHERE `settings`.`name` = 'login_required' LIMIT 1
Setting Load (0.2ms) SELECT `settings`.* FROM `settings` WHERE `settings`.`name` = 'force_default_language_for_anonymous' LIMIT 1
SQL (1.2ms) SELECT `members`.`id` AS t0_r0, `members`.`user_id` AS t0_r1, `members`.`project_id` AS t0_r2, `members`.`created_on` AS t0_r3, `members`.`mail_notification` AS t0_r4, `projects`.`id` AS t1_r0, `projects`.`name` AS t1_r1, `projects`.`description` AS t1_r2, `projects`.`homepage` AS t1_r3, `projects`.`is_public` AS t1_r4, `projects`.`parent_id` AS t1_r5, `projects`.`created_on` AS t1_r6, `projects`.`updated_on` AS t1_r7, `projects`.`identifier` AS t1_r8, `projects`.`status` AS t1_r9, `projects`.`lft` AS t1_r10, `projects`.`rgt` AS t1_r11, `projects`.`inherit_members` AS t1_r12, `roles`.`id` AS t2_r0, `roles`.`name` AS t2_r1, `roles`.`position` AS t2_r2, `roles`.`assignable` AS t2_r3, `roles`.`builtin` AS t2_r4, `roles`.`permissions` AS t2_r5, `roles`.`issues_visibility` AS t2_r6 FROM `members` LEFT OUTER JOIN `projects` ON `projects`.`id` = `members`.`project_id` LEFT OUTER JOIN `member_roles` ON `member_roles`.`member_id` = `members`.`id` LEFT OUTER JOIN `roles` ON `roles`.`id` = `member_roles`.`role_id` WHERE `members`.`user_id` = 2 AND (projects.status<>9) ORDER BY projects.name
Role Load (0.2ms) SELECT `roles`.* FROM `roles` WHERE `roles`.`builtin` = 2 LIMIT 1
Filter chain halted as :authorize_global rendered or redirected
Completed 401 Unauthorized in 54.3ms (ActiveRecord: 2.7ms)
The Current user: anonymous line in the log helped lead us to the fix.
