Create index in kibana without using kibana - elasticsearch

I'm very new to the Elasticsearch-Kibana-Logstash stack and can't seem to find a solution for this one.
I'm trying to create an index that I will see in Kibana without having to use the POST command in the Dev Tools section.
I have set up test.conf:
input {
  file {
    path => "/home/user/logs/server.log"
    type => "test-type"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "new-index"
  }
}
and then run
bin/logstash -f test.conf
from the Logstash directory.
What I get is that I can't find new-index in Kibana (Index Patterns section). When I query Elasticsearch at http://localhost:9200/new-index/ it returns an error, and when I go to http://localhost:9600/ (the port Logstash reports) there don't seem to be any errors.
Thanks a lot for the help!!

It's obvious that you won't be able to find the index you've created using Logstash in Kibana unless you manually create an index pattern for it in the Management section of Kibana.
Make sure that you use the same name as the index you created with Logstash. Have a look at the doc, which conveys:
When you define an index pattern, indices that match that pattern must
exist in Elasticsearch. Those indices must contain data.
which pretty much says that the index has to exist (and contain data) before you can create the index pattern in Kibana. Hope it helps!
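A quick way to confirm that the index exists and actually contains documents before defining the index pattern is to ask Elasticsearch directly; a small sketch, assuming Elasticsearch is on localhost:9200 as in the question:
curl 'http://localhost:9200/_cat/indices?v'           # list all indices with their doc counts
curl 'http://localhost:9200/new-index/_count?pretty'  # document count for the index from the question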

I have actually succeeded in creating the index even without first creating it in Kibana.
I used the following config file:
input {
  file {
    path => "/home/hadar/tmp/logs/server.log"
    type => "test-type"
    id => "NEWTRY"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY:day} %{HOUR:hour}:%{MINUTE:minute}:%{SECOND:second} - %{LOGLEVEL:level} - %{WORD:scriptName}.%{WORD:scriptEND} - " }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "new-index"
    codec => line { format => "%{year}-%{month}-%{day} %{hour}:%{minute}:%{second} - %{level} - %{scriptName}.%{scriptEND} - \"%{message}\"" }
  }
}
I made sure that the index wasn't already in Kibana (I tried other index names too, just to be sure...) and eventually I did see the index with the log's info both in Kibana (I added it in the Index Patterns section) and in Elasticsearch when I went to http://localhost:9200/new-index.
The only thing I had to do was erase the .sincedb_XXX files, which are created under data/plugins/inputs/file/ after every Logstash run,
OR
the other solution (for test environments only) is to add sincedb_path => "/dev/null" to the file input plugin, which writes the sincedb to /dev/null so no read position is persisted between runs.
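For reference, a minimal sketch of the file input with that option (path and type taken from the config above, adjust to your environment):
input {
  file {
    path => "/home/hadar/tmp/logs/server.log"
    type => "test-type"
    start_position => "beginning"
    sincedb_path => "/dev/null"   # test environments only: the read position is never persisted, so the file is reparsed on every run
  }
}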

You can also create an index directly in Elasticsearch using the Create Index API: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html
and these indices can then be used in Kibana.
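A minimal sketch of that API, assuming a local cluster on port 9200 and a hypothetical index name my-index:
curl -X PUT 'http://localhost:9200/my-index?pretty' -H 'Content-Type: application/json' -d '
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}'
As quoted in the answer above, Kibana will only offer the index for an index pattern once it contains data.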

Related

How to index a csv document in elasticsearch?

I'm trying to upload some CSV files into Elasticsearch. I don't want to mess it up, so I'm asking for some guidance. Can someone help with a video/tutorial/documentation on how to index a document in Elasticsearch? I've read the official documentation, but I feel a bit lost as a beginner. It will be fine if you recommend a video tutorial or describe some steps. Hope you are all doing well! Thank you for your time!
The best way is to use Logstash, which is the official and very fast pipeline for Elastic; you can download it from here.
First of all, create a configuration file like the example below and save it as logstashExample.conf in the bin directory of Logstash.
Assuming that the Elasticsearch server and the Kibana console are up and running, run the configuration file with the command ./logstash -f logstashExample.conf.
I've also added a suitable example of a related configuration file for Logstash. Please change the index name in the output and the file path in the input as needed; you can also disable filtering by removing the csv components in the example below.
input {
  file {
    path => "/home/timo/bitcoin-data/*.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    # Date,Open,High,Low,Close,Volume (BTC),Volume (Currency),Weighted Price
    columns => ["Date","Open","High","Low","Close","Volume (BTC)","Volume (Currency)","Weighted Price"]
  }
}
output {
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "bitcoin-prices"
  }
  stdout {}
}
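The csv filter indexes every column as a string by default. If the numeric columns should be searchable as numbers, one option (not part of the original answer, just a sketch reusing the column names above) is to add a mutate filter after the csv filter:
filter {
  csv {
    separator => ","
    columns => ["Date","Open","High","Low","Close","Volume (BTC)","Volume (Currency)","Weighted Price"]
  }
  mutate {
    # convert price/volume columns from strings to floats before indexing
    convert => {
      "Open" => "float"
      "High" => "float"
      "Low" => "float"
      "Close" => "float"
      "Weighted Price" => "float"
    }
  }
}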

Elasticsearch is slow to update mapping

Elasticsearch has become extremely slow when receiving input from Logstash, particularly when using file input. I have had to wait for 10+ minutes to get results. Oddly enough, standard input is mapped very quickly, which leads me to believe that my config file may be too complex, but it really isn't.
My config file uses file input, my filter is grok with 4 fields, and my output is to Elasticsearch. The file I am inputting is a .txt with 5 lines in it.
Any suggestions? Elasticsearch and Logstash newbie here.
input {
  file {
    type => "type"
    path => "insert_path"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{IP:client} %{WORD:primary} %{WORD:secondary} %{NUMBER:speed}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
Thought: Is it a sincedb_path issue? I often try to reparse files without changing the filename.

Is it possible to join 2 lines of logs from 2 separate log files using Logstash

I have 2 separate IIS log files (advanced and simple).
They need to be joined, because the advanced log is missing information that the simple one contains, but they share the same timestamp.
input {
  file {
    path => "C:/Logs/*.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    break_on_match => "true"
    match => ["message", '%{TIMESTAMP_ISO8601:log_timestamp} \"%{DATA:s_computername}\"']
    match => ["message", "%{TIMESTAMP_ISO8601:log_timestamp} %{NOTSPACE:uriQuery} "]
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
}
This is a simplified version. I want Elasticsearch to have an entry containing log_timestamp, s_computername, and uriQuery.
Using the multiline codec it would be possible to merge them if the lines were next to each other, but as they're in separate files I have not yet found a way to do this. Is it possible to merge the two using the same timestamp as a unique identifier?
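One way to illustrate the "same timestamp as a unique identifier" idea (this is not from the thread, just a sketch of a common approach, with a hypothetical index name) is to let both events write to the same Elasticsearch document by deriving the document ID from the parsed timestamp and using an update-with-upsert, so the two partial events merge into one document:
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "iis-merged"              # hypothetical index name
    document_id => "%{log_timestamp}"  # same timestamp => same document
    action => "update"                 # update the document if it already exists
    doc_as_upsert => true              # otherwise create it from this event
  }
}
This only works if the timestamp really is unique to each request pair; if several requests share a timestamp, their fields would collide in one document.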

Logstash single input and multiple output

I have configured Logstash to get input from one Filebeat port.
Filebeat is configured with two different paths. Is it possible to send the logs to two different indices?
Logstash input part:
input {
  beats {
    type => "stack"
    port => 5044
  }
}
Filebeat input part:
prospectors:
  paths:
    - E://stack/**/*.txt
    - E://test/**/*.txt
Now I need the "stack" logs to go into one index and the "test" logs into another.
How do I configure the Logstash output part?
What you can do is use the type property to decide which index the processed log should be stored in.
So your elasticsearch output could simply look like this, i.e. depending on the type value, a different index will be selected.
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{type}"
  }
}
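If you prefer explicit index names over interpolating %{type}, an equivalent way to write it (just a sketch, assuming the event types are "stack" and "test" and using hypothetical index names) is a conditional in the output section:
output {
  if [type] == "stack" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "stack-logs"   # hypothetical index name
    }
  } else if [type] == "test" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "test-logs"    # hypothetical index name
    }
  }
}
Note that with a single beats input the type set in Logstash applies to every event, so to distinguish the two paths you would typically set the type (or a custom field) per prospector on the Filebeat side.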

To copy an index from one machine to another in elasticsearch

I have some indices on one of my machines. I need to copy them to another machine; how can I do that in Elasticsearch?
I did find some good documentation here, but since I'm a newbie to the Elasticsearch ecosystem and since I'm toying with smaller data indices, I thought I would use some plugins or approaches that would be less time-consuming.
I would use Logstash with an elasticsearch input plugin and an elasticsearch output plugin.
After installing Logstash, you can create a configuration file copy.conf that looks like this:
input {
  elasticsearch {
    hosts => "localhost:9200"          # source ES host
    index => "source_index"
  }
}
filter {
  mutate {
    remove_field => [ "@version", "@timestamp" ]   # remove added junk
  }
}
output {
  elasticsearch {
    host => "localhost"                # target ES host
    port => 9200
    protocol => "http"
    manage_template => false
    index => "target_index"
    document_id => "%{id}"             # name of your ID field
    workers => 1
  }
}
And then after setting the correct values (source/target host + source/target index), you can run this with bin/logstash -f copy.conf
I can see 3 options here
Snapshot/Restore - You can move your data across geographical locations.
Logstash reindex - As pointed out by Val
Stream2ES - This is a simpler solution
You can use the Snapshot and Restore feature as well, where you take a snapshot (backup) of one index and then restore it somewhere else.
Just have a look at
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html
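A rough sketch of those API calls (repository and snapshot names are placeholders; the repository location must be listed in path.repo in elasticsearch.yml, and the snapshot files have to be reachable from the target cluster, e.g. via a shared filesystem or by copying the repository directory):
# 1. Register a filesystem repository on the source cluster
curl -X PUT 'http://localhost:9200/_snapshot/my_backup' -H 'Content-Type: application/json' -d '
{ "type": "fs", "settings": { "location": "/mnt/es_backups" } }'
# 2. Snapshot the index you want to copy
curl -X PUT 'http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true' -H 'Content-Type: application/json' -d '
{ "indices": "source_index" }'
# 3. On the target cluster (with the same repository registered), restore it
curl -X POST 'http://localhost:9200/_snapshot/my_backup/snapshot_1/_restore'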
