I use Logstash to index data from a database (in this case PostgreSQL) into an Elasticsearch index. This is my config:
input {
jdbc {
jdbc_driver_library => "/path/to/driver"
jdbc_driver_class => "org.postgresql.Driver"
jdbc_connection_string => "jdbc:postgresql://POSTGRE_HOST:5432/db"
jdbc_user => "postgres"
jdbc_password => "top-secret"
statement => "SELECT id, title, description, username FROM products"
add_field => [ "type", "product" ]
}
}
output {
if [type] == "product" {
elasticsearch {
action => "index"
hosts => "localhost:9200"
index => "products"
document_id => "%{id}"
document_type => "%{type}"
workers => 1
}
}
}
Question: How can I define a mapping for my SQL query, so that, e.g., title and description are indexed as text, but username is indexed as the keyword data type?
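One approach (a sketch, not from the original post) is to define the mapping outside the query itself by pointing the elasticsearch output at an index template file, similar to the template option used in other configurations below. Assuming Elasticsearch 7.x or later (older versions nest the mappings under a type name), a minimal template file could look like this; products_template.json is a placeholder name:

{
  "index_patterns": ["products"],
  "mappings": {
    "properties": {
      "title": { "type": "text" },
      "description": { "type": "text" },
      "username": { "type": "keyword" }
    }
  }
}

The output block would then reference it. Note the template only takes effect when the products index is created, so an existing index would need to be deleted or reindexed first:

elasticsearch {
  action => "index"
  hosts => "localhost:9200"
  index => "products"
  document_id => "%{id}"
  template => "/path/to/products_template.json"
  template_name => "products"
  template_overwrite => true
}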
Related
I have a MySQL database working as a primary database, and I'm ingesting data into Elasticsearch from MySQL using Logstash. I have successfully indexed the users table into Elasticsearch and it is working fine; however, my users table has the fields interest_id and interest_name, which contain the ids and names of user interests as follows:
"interest_id" : "1,2",
"interest_name" : "Business,Farming"
What I'm trying to achieve:
I want to build an interests object containing an array of interest ids and interest names, like so:
"interests" : [
{ "interest_name" : "Business", "interest_id" : "1" },
{ "interest_name" : "Farming", "interest_id" : "2" }
]
Please let me know if it's possible, and what the best approach is to achieve this.
My conf:
input {
jdbc {
jdbc_driver_library => "/home/logstash-7.16.3/logstash-core/lib/jars/mysql-connector-java-
8.0.22.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_connection_string => "jdbc:mysql://localhost:3306/"
jdbc_user => "XXXXX"
jdbc_password => "XXXXXXX"
sql_log_level => "debug"
clean_run => true
record_last_run => false
statement_filepath => "/home/logstash-7.16.3/config/queries/query.sql"
}
}
filter {
mutate {
remove_field => ["#version", "#timestamp",]
}
}
output {
elasticsearch {
hosts => ["https://XXXXXXXXXXXX:443"]
index => "users"
action => "index"
user => "XXXX"
password => "XXXXXX"
template_name => "myindex"
template => "/home/logstash-7.16.3/config/my_mapping.json"
template_overwrite => true
}
}
I have tried doing this by creating a nested field interests in my mapping and then adding a mutate filter in my conf file like this:
mutate {
rename => {
"interest_id" => "[interests][interest_id]"
"interest_name" => "[interests][interest_name]"
}
}
With this I'm only able to get this output:
"interests" : {
"interest_id" : "1,2",
"interest_name" : "Business,Farming"
}
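One way to get that array-of-objects shape (a sketch, not from the original post; it assumes interest_id and interest_name arrive as the comma-separated strings shown above and that interests is mapped as nested in my_mapping.json) is to split the two strings and zip them together with a ruby filter instead of the rename:

filter {
  ruby {
    code => '
      ids   = event.get("interest_id").to_s.split(",")
      names = event.get("interest_name").to_s.split(",")
      # build one { interest_id, interest_name } object per pair
      event.set("interests", ids.zip(names).map { |id, name|
        { "interest_id" => id.strip, "interest_name" => name.strip }
      })
      event.remove("interest_id")
      event.remove("interest_name")
    '
  }
}

With this, each document should contain interests as an array of objects rather than two comma-separated strings.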
I am using Logstash to ingest data into Elasticsearch. I am using the jdbc input, and I need to parameterize the jdbc input settings, such as the connection string, password, etc., since I have 10 .conf files where each one has 30 jdbc inputs and 30 outputs inside.
Since each file has the same settings, I would like to know whether it is possible to do something generic, or to reference that information from somewhere.
I have this 30 times:...
input {
# Number 1
jdbc {
jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/ifxjdbc-4.50.3.jar"
jdbc_driver_class => "com.informix.jdbc.IfxDriver"
jdbc_connection_string => "jdbc:informix-sqli://xxxxxxx/schema:informixserver=server"
jdbc_user => "xxx"
jdbc_password => "xxx"
schedule => "*/1 * * * *"
statement => "SELECT * FROM public.test ORDER BY id ASC"
tags => "001"
}
# Number 2
jdbc {
jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/ifxjdbc-4.50.3.jar"
jdbc_driver_class => "com.informix.jdbc.IfxDriver"
jdbc_connection_string => "jdbc:informix-sqli://xxxxxxx/schema:informixserver=server"
jdbc_user => "xxx"
jdbc_password => "xxx"
schedule => "*/1 * * * *"
statement => "SELECT * FROM public.test2 ORDER BY id ASC"
tags => "002"
}
[.........]
# Number X
jdbc {
jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/ifxjdbc-4.50.3.jar"
jdbc_driver_class => "com.informix.jdbc.IfxDriver"
jdbc_connection_string => "jdbc:informix-sqli://xxxxxxx/schema:informixserver=server"
jdbc_user => "xxx"
jdbc_password => "xxx"
schedule => "*/1 * * * *"
statement => "SELECT * FROM public.testx ORDER BY id ASC"
tags => "00x"
}
}
filter {
mutate {
add_field => { "[#metadata][mitags]" => "%{tags}" }
}
# Number 1
if "001" in [#metadata][mitags] {
mutate {
rename => [ "codigo", "[properties][codigo]" ]
}
}
# Number 2
if "002" in [#metadata][mitags] {
mutate {
rename => [ "codigo", "[properties][codigo]" ]
}
}
[......]
# Number x
if "002" in [#metadata][mitags] {
mutate {
rename => [ "codigo", "[properties][codigo]" ]
}
}
mutate {
remove_field => [ "#version","#timestamp","tags" ]
}
}
output {
# Number 1
if "001" in [#metadata][mitags] {
# Para ELK
elasticsearch {
hosts => "localhost:9200"
index => "001"
document_type => "001"
document_id => "%{id}"
manage_template => true
template => "/home/user/logstash/templates/001.json"
template_name => "001"
template_overwrite => true
}
}
# Number 2
if "002" in [#metadata][mitags] {
# Para ELK
elasticsearch {
hosts => "localhost:9200"
index => "002"
document_type => "002"
document_id => "%{id}"
manage_template => true
template => "/home/user/logstash/templates/002.json"
template_name => "002"
template_overwrite => true
}
}
[....]
# Number x
if "00x" in [#metadata][mitags] {
# Para ELK
elasticsearch {
hosts => "localhost:9200"
index => "002"
document_type => "00x"
document_id => "%{id}"
manage_template => true
template => "/home/user/logstash/templates/00x.json"
template_name => "00x"
template_overwrite => true
}
}
}
You will still need one jdbc input for each query you need to do, but you can improve your filter and output blocks.
In your filter block you are using the field [@metadata][mitags] to filter your inputs, but you are applying the same mutate filter to every one of them. If that is the case, you don't need the conditionals; the same mutate filter can be applied to all your inputs unconditionally.
Your filter block could be reduced to something like this one.
filter {
mutate {
add_field => { "[#metadata][mitags]" => "%{tags}" }
}
mutate {
rename => [ "codigo", "[properties][codigo]" ]
}
mutate {
remove_field => [ "#version","#timestamp","tags" ]
}
}
In your output block you use the tag only to change the index, the document_type, and the template. You don't need conditionals for that; you can use the value of the field as a parameter.
output {
elasticsearch {
hosts => "localhost:9200"
index => "%{[#metadata][mitags]}"
document_type => "%{[#metadata][mitags]}"
document_id => "%{id}"
manage_template => true
template => "/home/unitech/logstash/templates/%{[#metadata][mitags]}.json"
template_name => "iol-fue"
template_overwrite => true
}
}
But this only works if you have a single value in the field [@metadata][mitags], which seems to be the case.
EDIT:
Edited for history reasons: as noted in the comments, the template option does not allow dynamic parameters, since the template is only loaded when Logstash starts. The other options work fine.
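As for parameterizing the connection settings themselves, one option worth mentioning (not part of the original answer) is Logstash's environment variable substitution in pipeline configuration, so every jdbc block can reference the same values. The variable names below are placeholders you would export before starting Logstash:

input {
  jdbc {
    jdbc_driver_library => "${JDBC_DRIVER_PATH}"
    jdbc_driver_class => "com.informix.jdbc.IfxDriver"
    jdbc_connection_string => "${JDBC_CONNECTION_STRING}"
    jdbc_user => "${JDBC_USER}"
    jdbc_password => "${JDBC_PASSWORD}"
    schedule => "*/1 * * * *"
    statement => "SELECT * FROM public.test ORDER BY id ASC"
    tags => "001"
  }
}

A default can also be given inline, e.g. "${JDBC_USER:informix}", so the pipeline still starts when the variable is not set.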
Hi all, I am using the code below for indexing data from MS SQL Server to Elasticsearch, but I am not clear about this sql_last_value.
input {
jdbc {
jdbc_driver_library => ""
jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
jdbc_connection_string => "jdbc:sqlserver://xxxx:1433;databaseName=xxxx;"
jdbc_user => "xxxx"
jdbc_paging_enabled => true
tracking_column => "modified_date"
tracking_column_type => "timestamp"
use_column_value => true
jdbc_password => "xxxx"
clean_run => true
schedule => "*/1 * * * *"
statement => "Select * from [dbo].[xxxx] where modified_date >:sql_last_value"
}
}
filter {
if [is_deleted] {
mutate {
add_field => {
"[#metadata][elasticsearch_action]" => "delete"
}
}
mutate {
remove_field => [ "is_deleted","#version","#timestamp" ]
}
} else {
mutate {
add_field => {
"[#metadata][elasticsearch_action]" => "index"
}
}
mutate {
remove_field => [ "is_deleted","#version","#timestamp" ]
}
}
}
output {
elasticsearch {
hosts => "xxxx"
user => "xxxx"
password => "xxxx"
index => "xxxx"
action => "%{[#metadata][elasticsearch_action]}"
document_type => "_doc"
document_id => "%{id}"
}
stdout { codec => rubydebug }
}
Where is this sql_last_value stored, and how can I view it?
Is it possible to set a custom value for sql_last_value?
Could anyone please clarify the above questions?
The sql_last_value is stored in a file called .logstash_jdbc_last_run, which according to the docs lives at $HOME/.logstash_jdbc_last_run. The file contains the value of the last run (for example a timestamp), and it can be edited to set a specific value.
You should define the last_run_metadata_path parameter for each jdbc input and point it to a more specific location, because by default all running jdbc input instances share the same .logstash_jdbc_last_run file, which can lead to unwanted results.
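For illustration, a sketch with placeholder connection details, paths and table names (not from the posts above); the per-input option looks like this, and the file it points to is a small YAML document holding the last tracked value:

input {
  jdbc {
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://HOST:1433;databaseName=DB;"
    jdbc_user => "USER"
    jdbc_password => "PASSWORD"
    use_column_value => true
    tracking_column => "modified_date"
    tracking_column_type => "timestamp"
    last_run_metadata_path => "/var/lib/logstash/.jdbc_last_run_mytable"
    schedule => "*/1 * * * *"
    statement => "SELECT * FROM [dbo].[mytable] WHERE modified_date > :sql_last_value"
  }
}

The metadata file typically contains a single line such as --- 2022-03-01 10:15:30.000000000 Z (or --- 42 for a numeric tracking column). Editing or deleting it before the next run effectively sets a custom sql_last_value, and clean_run => true resets it as well.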
I am getting this error:
Rejecting mapping update to [db] as the final mapping would have more than 1 type: [meeting_invities, meetingroom
Below is my logstash-mysql.conf. I have to use multiple tables in the jdbc input. Please advise.
input {
jdbc {
jdbc_connection_string => "jdbc:mysql://localhost:3306/db"
jdbc_user => "root"
jdbc_password => "pwd"
jdbc_driver_class => "com.mysql.jdbc.Driver"
statement => "SELECT * FROM meeting"
tags => "dat_meeting"
}
jdbc {
jdbc_connection_string => "jdbc:mysql://localhost:3306/db"
jdbc_user => "root"
jdbc_password => "pwd"
jdbc_driver_class => "com.mysql.jdbc.Driver"
statement => "SELECT * FROM meeting_invities;"
tags => "dat_meeting_invities"
}
}
output {
stdout { codec => json_lines }
if "dat_meeting" in [tags] {
elasticsearch {
hosts => "localhost:9200"
index => "meetingroomdb"
document_type => "meeting"
}
}
if "dat_meeting_invities" in [tags] {
elasticsearch {
hosts => "localhost:9200"
index => "meetingroomdb"
document_type => "meeting_invities"
}
}
}
Your elasticsearch output uses the document_type option with two different values. This option sets the _type of the indexed documents, and since version 6.X an index can only have one type.
The option is deprecated in version 7.X and will be removed in future versions, since Elasticsearch is going typeless.
Because Elasticsearch won't allow more than one type in the same index, you need to check the type of the first document you indexed; that is the type Elasticsearch will use for every future document in that index. Use this value in both document_type options.
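For example, if the first document was indexed with type meeting (an assumption here, check your own index), both branches would use that same value; alternatively, each table could be sent to its own index to keep them separate:

output {
  stdout { codec => json_lines }
  if "dat_meeting" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      index => "meetingroomdb"
      document_type => "meeting"
    }
  }
  if "dat_meeting_invities" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      index => "meetingroomdb"
      document_type => "meeting"
    }
  }
}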
I am new to Elasticsearch. I am using Logstash to push data from my PostgreSQL database to an Elasticsearch index. I usually set jdbc_page_size => 100000 in the config file for faster ingestion. However, the data is not fully pushed even though the Logstash logs say all of it has been pushed. Setting jdbc_page_size => 25000 instead solves the problem.
I am facing this problem specifically with PostgreSQL (not with MySQL or MS SQL Server). If anyone has any insight, please clarify why this is happening.
EDIT :
Config File as requested:
input {
jdbc {
jdbc_connection_string => "jdbc:postgresql://ip:5432/dbname"
jdbc_user => "postgres"
jdbc_password => "postgres"
jdbc_driver_library => "/postgresql.jar"
jdbc_driver_class => "org.postgresql.Driver"
jdbc_paging_enabled => true
jdbc_page_size => 25000
statement => "select * from source_table"
}
}
output {
elasticsearch {
hosts => "localhost:9200"
index => "sample"
document_type => "docs"
document_id => "%{id}"
}
}
PostgreSQL does not return records in a consistent order across paged queries, so add an ORDER BY clause to the query; that will solve your issue.
You can try the configuration below; it works.
input {
jdbc {
jdbc_connection_string => "jdbc:postgresql://ip:5432/dbname"
jdbc_user => "postgres"
jdbc_password => "postgres"
jdbc_driver_library => "/postgresql.jar"
jdbc_driver_class => "org.postgresql.Driver"
jdbc_paging_enabled => true
jdbc_page_size => 25000
statement => "select * from source_table order by id desc"
}
}
output {
elasticsearch {
hosts => "localhost:9200"
index => "sample"
document_type => "docs"
document_id => "%{id}"
}
}
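For context on why the ORDER BY matters: with jdbc_paging_enabled the plugin fetches the result set in pages, conceptually running something along these lines for each page (the exact SQL is generated by the underlying library and may differ):

SELECT * FROM (select * from source_table order by id desc) AS paged LIMIT 25000 OFFSET 0;
SELECT * FROM (select * from source_table order by id desc) AS paged LIMIT 25000 OFFSET 25000;
-- and so on, one query per page

Without a stable ORDER BY, PostgreSQL is free to return rows in a different order for each page, so some rows can appear twice and others not at all, which matches the missing-data symptom described above.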