I run mvn sonar:sonar on a project, and the problems in our code are recorded as technical debt in SonarQube when we use the default database - i.e. a default install of version 4.5. It works out of the box:
Blocker 0
Critical 0
Major 88
Minor 70
Info 12
When I change the datasource to MySQL, only
Blocker 0
Critical 0
Major 0
Minor 0
Info 0
is recorded. No errors are reported by mvn or Sonar, and I can see all the drivers load during the mvn run.
In .m2/settings.xml I have
<profile>
  <id>sonar</id>
  <activation>
    <activeByDefault>true</activeByDefault>
  </activation>
  <properties>
    <sonar.jdbc.url>
      jdbc:mysql://sonar.company.io:3306/sonar?useUnicode=true&amp;characterEncoding=utf8
    </sonar.jdbc.url>
    <sonar.jdbc.username>sonar</sonar.jdbc.username>
    <sonar.jdbc.password>pass</sonar.jdbc.password>
    <sonar.host.url>
      http://sonar.company.io:9000
    </sonar.host.url>
    <sonar.verbose>true</sonar.verbose>
  </properties>
</profile>
I am able to connect via the hostname using mysql -usonar -ppass -h sonar.company.io, and there is content in the database.
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| sonar |
mysql> show table status;
+---------------------------+--------+---------+------------+-------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------+----------+----------------+---------+
| Name | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Collation | Checksum | Create_options | Comment |
+---------------------------+--------+---------+------------+-------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------+----------+----------------+---------+
| action_plans | InnoDB | 10 | Compact | 0 | 0 | 16384 | 0 | 16384 | 4194304 | 1 | 2014-09-30 14:30:17 | NULL | NULL | utf8_bin | NULL | | |
| active_dashboards | InnoDB | 10 | Compact | 5 | 3276 | 16384 | 0 | 32768 | 4194304 | 6 | 2014-09-30 14:30:10 | NULL | NULL | utf8_bin | NULL | | |
| active_rule_parameters | InnoDB | 10 | Compact | 42 | 390 | 16384 | 0 | 0 | 4194304 | 43 | 2014-09-30 14:30:16 | NULL | NULL | utf8_bin | NULL | | |
| active_rules | InnoDB | 10 | Compact | 1099 | 74 | 81920 | 0 | 49152 | 4194304 | 992 | 2014-09-30 14:30:18 | NULL | NULL | utf8_bin | NULL | | |
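A further sanity check I can run on the MySQL side is to count rows in a couple of the analysis tables (a sketch - issues and snapshots are standard SonarQube tables that simply don't appear in the truncated listing above):
SELECT COUNT(*) FROM issues;
SELECT COUNT(*) FROM snapshots;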
What could be the reason?
I have Kafka Connect set up to dump data from my topic into HDFS, with Hive integration enabled:
"confluent.topic.bootstrap.servers": "kafka-1:19092,kafka-2:29092,kafka-3:39092",
"connector.class": "io.confluent.connect.hdfs3.Hdfs3SinkConnector",
"flush.size": "3",
"hdfs.url": "hdfs://namenode:9000",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"logs.dir": "logs",
"name": "kafka to hdfs - repos",
"topics": "repos",
"topics.dir": "topics",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "http://schema-registry:8081",
"hive.integration": "true",
"hive.metastore.uris": "thrift://hive-metastore:9083",
"schema.compatibility": "BACKWARD"
When running the task, data is sent to HDFS:
# hdfs dfs -ls /topics/repos/partition=0
Found 3 items
-rw-r--r-- 3 appuser supergroup 281 2022-12-02 13:42 /topics/repos/partition=0/repos+0+0000000000+0000000002.avro
-rw-r--r-- 3 appuser supergroup 294 2022-12-02 13:42 /topics/repos/partition=0/repos+0+0000000003+0000000005.avro
-rw-r--r-- 3 appuser supergroup 283 2022-12-02 13:42 /topics/repos/partition=0/repos+0+0000000006+0000000008.avro
And the table is created in the Hive metastore:
0: jdbc:hive2://localhost:10000> show tables;
+-----------+
| tab_name |
+-----------+
| repos |
+-----------+
But for some reason the table is empty - no data is loaded from the HDFS files:
0: jdbc:hive2://localhost:10000> select * from repos;
+------------------+------------------+
| repos.repo_name | repos.partition |
+------------------+------------------+
+------------------+------------------+
Looking at the configuration of the generated table, the hookup seems fine:
0: jdbc:hive2://localhost:10000> DESCRIBE FORMATTED repos;
+-------------------------------+----------------------------------------------------+-----------------------------+
| col_name | data_type | comment |
+-------------------------------+----------------------------------------------------+-----------------------------+
| # col_name | data_type | comment |
| | NULL | NULL |
| repo_name | string | |
| | NULL | NULL |
| # Partition Information | NULL | NULL |
| # col_name | data_type | comment |
| | NULL | NULL |
| partition | string | |
| | NULL | NULL |
| # Detailed Table Information | NULL | NULL |
| Database: | default | NULL |
| Owner: | null | NULL |
| CreateTime: | Sat Dec 03 11:46:56 UTC 2022 | NULL |
| LastAccessTime: | UNKNOWN | NULL |
| Retention: | 0 | NULL |
| Location: | hdfs://namenode:9000/topics/repos | NULL |
| Table Type: | EXTERNAL_TABLE | NULL |
| Table Parameters: | NULL | NULL |
| | COLUMN_STATS_ACCURATE | {\"BASIC_STATS\":\"true\"} |
| | EXTERNAL | TRUE |
| | bucketing_version | 2 |
| | numFiles | 0 |
| | numPartitions | 0 |
| | numRows | 0 |
| | rawDataSize | 0 |
| | totalSize | 0 |
| | transient_lastDdlTime | 1670068016 |
| | NULL | NULL |
| # Storage Information | NULL | NULL |
| SerDe Library: | org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe | NULL |
| InputFormat: | org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat | NULL |
| OutputFormat: | org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat | NULL |
| Compressed: | No | NULL |
| Num Buckets: | -1 | NULL |
| Bucket Columns: | [] | NULL |
| Sort Columns: | [] | NULL |
| Storage Desc Params: | NULL | NULL |
| | serialization.format | 1 |
+-------------------------------+----------------------------------------------------+-----------------------------+
I can't find any errors written anywhere in the logs. I think it could be related to the sub-folder "partition=0", but I am not quite sure how Hive deals with this.
Am I missing something in the configuration? Or is there something special that has to be done for the data to be loaded?
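One thing worth checking (a diagnostic sketch on my part, not something taken from the connector docs) is whether the partition=0 directory has actually been registered as a partition in the metastore - the table parameters above show numPartitions = 0:
-- List the partitions Hive currently knows about for this table.
SHOW PARTITIONS repos;
-- Register any partition directories (such as partition=0) that exist under
-- the table location on HDFS but are missing from the metastore.
MSCK REPAIR TABLE repos;
-- Or register the directory explicitly.
ALTER TABLE repos ADD IF NOT EXISTS PARTITION (`partition`='0')
  LOCATION 'hdfs://namenode:9000/topics/repos/partition=0';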
I have a problem with Apache NiFi 1.12.1. For some reason unknown to me, CaptureChangeMySQL returns many nulls - basically, only columns that are int return correct values. I'm new to NiFi, so I might be missing something obvious in the configuration.
I have the following table:
create table inventory.abc
(
id int auto_increment
primary key,
first_name varchar(100) not null,
last_name varchar(100) not null,
age int not null
);
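For reference, change events can be produced with simple statements like these (hypothetical sample data, only meant to generate binlog row events for the processor to capture):
INSERT INTO inventory.abc (first_name, last_name, age) VALUES ('John', 'Doe', 30);
UPDATE inventory.abc SET age = 31 WHERE id = 1;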
Processor config:
Binlog settings:
mysql> show variables like '%bin%';
+--------------------------------------------+--------------------------------+
| Variable_name | Value |
+--------------------------------------------+--------------------------------+
| bind_address | * |
| binlog_cache_size | 32768 |
| binlog_checksum | CRC32 |
| binlog_direct_non_transactional_updates | OFF |
| binlog_error_action | ABORT_SERVER |
| binlog_format | ROW |
| binlog_group_commit_sync_delay | 0 |
| binlog_group_commit_sync_no_delay_count | 0 |
| binlog_gtid_simple_recovery | ON |
| binlog_max_flush_queue_time | 0 |
| binlog_order_commits | ON |
| binlog_row_image | FULL |
| binlog_rows_query_log_events | OFF |
| binlog_stmt_cache_size | 32768 |
| binlog_transaction_dependency_history_size | 25000 |
| binlog_transaction_dependency_tracking | COMMIT_ORDER |
| innodb_api_enable_binlog | OFF |
| innodb_locks_unsafe_for_binlog | OFF |
| log_bin | ON |
| log_bin_basename | /var/lib/mysql/mysql-bin |
| log_bin_index | /var/lib/mysql/mysql-bin.index |
| log_bin_trust_function_creators | OFF |
| log_bin_use_v1_row_events | OFF |
| log_statements_unsafe_for_binlog | ON |
| max_binlog_cache_size | 18446744073709547520 |
| max_binlog_size | 1073741824 |
| max_binlog_stmt_cache_size | 18446744073709547520 |
| sql_log_bin | ON |
| sync_binlog | 1 |
+--------------------------------------------+--------------------------------+
29 rows in set (0.00 sec)
And I get results like this:
Any idea why I get so many nulls in the output? I thought it might be related to the Distributed Map Cache Client, but since that option is not mandatory, I don't think that's the problem.
I'm a newbie with ProxySQL. This is my environment:
CentOS 7, proxysql-1.4.13. I define 2 hostgroup ids: hostgroup id 2 for the MySQL server that can write, hostgroup id 3 for the MySQL server that can read.
All queries that begin with insert, update, delete, or alter should be routed to hostgroup id 2.
All queries that begin with select should be routed to hostgroup id 3.
So here are my rules:
Admin> select rule_id,active,digest,match_digest,destination_hostgroup,flagIN,flagOUT,next_query_flagIN,sticky_conn,apply from mysql_query_rules;
+---------+--------+--------------------+--------------+-----------------------+--------+---------+-------------------+-------------+-------+
| rule_id | active | digest | match_digest | destination_hostgroup | flagIN | flagOUT | next_query_flagIN | sticky_conn | apply |
+---------+--------+--------------------+--------------+-----------------------+--------+---------+-------------------+-------------+-------+
| 101 | 1 | NULL | ^insert | 2 | 0 | NULL | NULL | NULL | 1 |
| 102 | 1 | NULL | ^update | 2 | 0 | NULL | NULL | NULL | 1 |
| 103 | 1 | NULL | ^delete | 2 | 0 | NULL | NULL | NULL | 1 |
| 104 | 1 | NULL | ^alter | 2 | 0 | NULL | NULL | NULL | 1 |
| 105 | 1 | NULL | ^select | 3 | 0 | NULL | NULL | NULL | 1 |
+---------+--------+--------------------+--------------+-----------------------+--------+---------+-------------------+-------------+-------+
And they work fine. By the way, are these rules OK? Should I replace match_digest with match_pattern?
However, my application has a feature that inserts data into a table (create booking) and then selects the new data from that table almost immediately.
So if the select query fails (because the new data has not been replicated to hostgroup id 3 yet), the application feature will behave incorrectly.
I want to route the select query that comes right after the insert query to hostgroup id 2, so that it gets the new data successfully.
I read the ProxySQL documentation https://github.com/sysown/proxysql/wiki/Main-(runtime)#mysql_query_rules and this discussion https://github.com/sysown/proxysql/pull/825, and I think this is the solution for me, isn't it? I still don't clearly understand the flagIN, flagOUT, next_query_flagIN and sticky_conn stuff, but I will give it a try.
I know the insert query digest is 0xCDD6DB677604AFA7
The select query digest is 0x0DCD2E8ADF6A66CB
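For reference, digests like these can be looked up in ProxySQL's stats_mysql_query_digest table (a sketch - the booking table name is only an illustration):
SELECT digest, digest_text, count_star
FROM stats_mysql_query_digest
WHERE digest_text LIKE 'INSERT INTO booking%'
   OR digest_text LIKE 'SELECT%FROM booking%';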
Then I add 2 new rules:
Admin> select rule_id,active,digest,match_digest,destination_hostgroup,flagIN,flagOUT,next_query_flagIN,sticky_conn,apply from mysql_query_rules;
+---------+--------+--------------------+--------------+-----------------------+--------+---------+-------------------+-------------+-------+
| rule_id | active | digest | match_digest | destination_hostgroup | flagIN | flagOUT | next_query_flagIN | sticky_conn | apply |
+---------+--------+--------------------+--------------+-----------------------+--------+---------+-------------------+-------------+-------+
| 1 | 1 | 0xCDD6DB677604AFA7 | NULL | 2 | 0 | NULL | 1 | 1 | 1 |
| 2 | 1 | 0x0DCD2E8ADF6A66CB | NULL | 2 | 1 | NULL | NULL | NULL | 1 |
| 101 | 1 | NULL | ^insert | 2 | 0 | NULL | NULL | NULL | 1 |
| 102 | 1 | NULL | ^update | 2 | 0 | NULL | NULL | NULL | 1 |
| 103 | 1 | NULL | ^delete | 2 | 0 | NULL | NULL | NULL | 1 |
| 104 | 1 | NULL | ^alter | 2 | 0 | NULL | NULL | NULL | 1 |
| 105 | 1 | NULL | ^select | 3 | 0 | NULL | NULL | NULL | 1 |
+---------+--------+--------------------+--------------+-----------------------+--------+---------+-------------------+-------------+-------+
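For completeness, the two new rules were created along these lines in the admin interface (a sketch matching the values shown above, followed by the standard commands for activating and persisting rule changes):
INSERT INTO mysql_query_rules (rule_id, active, digest, destination_hostgroup, flagIN, next_query_flagIN, sticky_conn, apply)
VALUES (1, 1, '0xCDD6DB677604AFA7', 2, 0, 1, 1, 1);
INSERT INTO mysql_query_rules (rule_id, active, digest, destination_hostgroup, flagIN, apply)
VALUES (2, 1, '0x0DCD2E8ADF6A66CB', 2, 1, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;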
They work fine: the select query right after the insert query is routed to hostgroup id 2, so it gets the new data successfully and the application feature runs OK.
But am I doing this right?
I'm confused by the stats_mysql_query_rules results.
Before the application feature runs:
Admin> select * from stats_mysql_query_rules;
+---------+------+
| rule_id | hits |
+---------+------+
| 1 | 20 |
| 2 | 20 |
| 101 | 33 |
| 102 | 0 |
| 103 | 2 |
| 104 | 0 |
| 105 | 903 |
+---------+------+
After the application feature runs (10 bookings):
Admin> select * from stats_mysql_query_rules;
+---------+------+
| rule_id | hits |
+---------+------+
| 1 | 30 |
| 2 | 30 |
| 101 | 43 |
| 102 | 0 |
| 103 | 2 |
| 104 | 0 |
| 105 | 1313 |
+---------+------+
Why does the hit count for rule_id 101 increase from 33 to 43? Does rule_id 101 (match_digest ^insert) also match the insert query from the application feature? Does that mean I'm doing something wrong?
I run the same query in 2 environments with a huge performance difference: 0.015 sec vs 25 sec.
Explain plan:
+------+-------------+---------------+--------+------------------------------------+---------+---------+---------------------------------------------------------------------------------------------------------------------------+------+----------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+------+-------------+---------------+--------+------------------------------------+---------+---------+---------------------------------------------------------------------------------------------------------------------------+------+----------+---------------------------------+
| 1 | SIMPLE | company1_ | const | PRIMARY | PRIMARY | 152 | const | 1 | 100.00 | Using temporary; Using filesort |
| 1 | SIMPLE | user2_ | ref | PRIMARY | PRIMARY | 152 | const | 1032 | 100.00 | Using where |
| 1 | SIMPLE | vacationpr5_ | eq_ref | PRIMARY | PRIMARY | 304 | user2_.ID_COMPANY_VACATION_PROFILE,.user2_.ID_VACATION_PROFILE | 1 | 100.00 | Using index |
| 1 | SIMPLE | vacationac0_ | ref | PRIMARY,I_VACATION_ACCUMULATION_EA | PRIMARY | 304 | const,.user2_.ID_USER | 4 | 100.00 | Using where |
| 1 | SIMPLE | vacationty3_ | eq_ref | PRIMARY | PRIMARY | 304 | const,.vacationac0_.ID_VACATION_TYPE | 1 | 100.00 | Using where |
| 1 | SIMPLE | vacationst6_ | eq_ref | PRIMARY | PRIMARY | 608 | user2_.ID_COMPANY_VACATION_PROFILE,.user2_.ID_VACATION_PROFILE,const,.vacationac0_.ID_VACATION_TYPE | 1 | 100.00 | Using where |
| 1 | SIMPLE | translatio9_ | eq_ref | PRIMARY | PRIMARY | 919 | vacationty3_.ID_COMPANY_TRANSLATION,.vacationty3_.ID_TRANSLATION | 1 | 100.00 | Using index |
| 1 | SIMPLE | descriptio10_ | eq_ref | PRIMARY, | PRIMARY | 951 | vacationty3_.ID_COMPANY_TRANSLATION,.vacationty3_.ID_TRANSLATION,const | 1 | 100.00 | Using where |
| 1 | SIMPLE | listvalue4_ | ALL | NULL | NULL | NULL | NULL | 5284 | 100.00 | Using where |
| 1 | SIMPLE | translatio7_ | eq_ref | PRIMARY | PRIMARY | 919 | listvalue4_.ID_COMPANY_TRANSLATION,.listvalue4_.ID_TRANSLATION | 1 | 100.00 | Using index |
| 1 | SIMPLE | descriptio8_ | eq_ref | PRIMARY | PRIMARY | 951 | listvalue4_.ID_COMPANY_TRANSLATION,.listvalue4_.ID_TRANSLATION,const | 1 | 100.00 | Using where |
+------+-------------+---------------+--------+------------------------------------+---------+---------+---------------------------------------------------------------------------------------------------------------------------+------+----------+---------------------------------+
Next explain plan:
+------+-------------+---------------+--------+------------------------------------+---------+---------+---------------------------------------------------------------------------------------------------------------------------------------+------+----------+-------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+------+-------------+---------------+--------+------------------------------------+---------+---------+---------------------------------------------------------------------------------------------------------------------------------------+------+----------+-------------------------------------------------+
| 1 | SIMPLE | company1_ | const | PRIMARY | PRIMARY | 152 | const | 1 | 100.00 | Using temporary; Using filesort |
| 1 | SIMPLE | user2_ | ref | PRIMARY, | PRIMARY | 152 | const | 1050 | 100.00 | Using where |
| 1 | SIMPLE | vacationpr5_ | eq_ref | PRIMARY | PRIMARY | 304 | validation2.user2_.ID_COMPANY_VACATION_PROFILE,validation2.user2_.ID_VACATION_PROFILE | 1 | 100.00 | Using index |
| 1 | SIMPLE | vacationac0_ | ref | PRIMARY,I_VACATION_ACCUMULATION_EA | PRIMARY | 304 | const,validation2.user2_.ID_USER | 5 | 100.00 | Using where |
| 1 | SIMPLE | vacationty3_ | eq_ref | PRIMARY | PRIMARY | 304 | const,validation2.vacationac0_.ID_VACATION_TYPE | 1 | 100.00 | Using where |
| 1 | SIMPLE | vacationst6_ | eq_ref | PRIMARY | PRIMARY | 608 | validation2.user2_.ID_COMPANY_VACATION_PROFILE,validation2.user2_.ID_VACATION_PROFILE,const,validation2.vacationac0_.ID_VACATION_TYPE | 1 | 100.00 | Using where |
| 1 | SIMPLE | translatio9_ | eq_ref | PRIMARY | PRIMARY | 919 | validation2.vacationty3_.ID_COMPANY_TRANSLATION,validation2.vacationty3_.ID_TRANSLATION | 1 | 100.00 | Using index |
| 1 | SIMPLE | descriptio10_ | eq_ref | PRIMARY, | PRIMARY | 951 | validation2.vacationty3_.ID_COMPANY_TRANSLATION,validation2.vacationty3_.ID_TRANSLATION,const | 1 | 100.00 | Using where |
| 1 | SIMPLE | listvalue4_ | ALL | NULL | NULL | NULL | NULL | 5282 | 100.00 | Using where; Using join buffer (flat, BNL join) |
| 1 | SIMPLE | translatio7_ | eq_ref | PRIMARY | PRIMARY | 919 | validation2.listvalue4_.ID_COMPANY_TRANSLATION,validation2.listvalue4_.ID_TRANSLATION | 1 | 100.00 | Using index |
| 1 | SIMPLE | descriptio8_ | eq_ref | PRIMARY, | PRIMARY | 951 | validation2.listvalue4_.ID_COMPANY_TRANSLATION,validation2.listvalue4_.ID_TRANSLATION,const | 1 | 100.00 | Using where |
+------+-------------+---------------+--------+------------------------------------+---------+---------+---------------------------------------------------------------------------------------------------------------------------------------+------+----------+-------------------------------------------------+
How can I force the use of the join buffer (flat, BNL join)? The first environment is the production one and has more memory and CPU.
In the first environment:
join_buffer_size............ 16777216
join_buffer_space_limit..... 2097152
In the second environment:
join_buffer_size............ 262144
join_buffer_space_limit..... 2097152
Is there any link/ratio between join_buffer_size and join_buffer_space_limit?
We configured join_buffer_size to 16 MB because it was a MySQLTuner hint.
I set join_buffer_space_limit to 128 MB and it resolved the performance issue.
So MySQLTuner doesn't give a hint for this configuration key.
SET GLOBAL join_buffer_space_limit = 1024 * 1024 * 128;
It took time (about an hour) for performance to improve.
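For reference, the current values of both settings can be checked with plain SQL (nothing specific to my setup):
SHOW GLOBAL VARIABLES LIKE 'join_buffer%';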
https://mariadb.com/kb/en/library/multi-range-read-optimization/
I like to use the mysql client, but when using UTF-8, the tables on the console are misaligned:
> set names utf8;
> [some query]
+--------+---------+---------------------------------+-----------------------------+----------+---------+-----------+-------+---------+-----------+
| RuleId | TaxonId | Note | NoteSci | MinCount | DayFrom | MonthFrom | DayTo | MonthTo | ExtraNote |
+--------+---------+---------------------------------+-----------------------------+----------+---------+-----------+-------+---------+-----------+
| 722 | 10090 | sedmihlásek malý | Hippolais caligata | 1 | 1 | 1 | 31 | 12 | NULL |
| 727 | 10059 | Anseranas semipalmata | husovec strakatý | 1 | 1 | 1 | 31 | 12 | NULL |
| 728 | 10062 | Cygnus atratus | labuť černá | 1 | 1 | 1 | 31 | 12 | NULL |
| 729 | 10094 | Anser cygnoides | husa labutí | 1 | 1 | 1 | 31 | 12 | NULL |
| 730 | 10063 | Tadorna cana | husice šedohlavá | 1 | 1 | 1 | 31 | 12 | NULL |
| 731 | 10031 | Cairina moschata f. domestica | pižmovka domácí | 20 | 1 | 1 | 31 | 12 | NULL |
| 732 | 10088 | Cairina scutulata | pižmovka bělokřídlá | 1 | 1 | 1 | 31 | 12 | NULL |
| 733 | 10087 | Anas sibilatrix | hvízdák chilský | 1 | 1 | 1 | 31 | 12 | NULL |
| 734 | 10077 | Anas platyrhynchos f. domestica | kachna domácí | 1000 | 1 | 1 | 31 | 12 | NULL |
| 735 | 10086 | Anas hottentota | čírka hottentotská | 1 | 1 | 1 | 31 | 12 | NULL |
|
This is apparently because the mysql client computes the column widths using the byte length of the strings, which doesn't take multi-byte UTF-8 characters into account - so there is exactly one space missing for each accented character (because each of those actually takes two bytes).
Do you know a possible workaround for this problem?
Run your mysql client with the charset option:
mysql -uUSER -p DATABASE --default-character-set=utf8
(USER and DATABASE should be replaced with your actual credentials.)
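If the client is already running, the character set can also be switched from inside the session using the client's built-in charset command (a mysql client command, not SQL):
mysql> charset utf8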