Logstash Indexing - elasticsearch

I would like to create two separate indexes for two different systems that are sending data to the Logstash server set up for UDP syslog. In Elasticsearch, I created an index called CiscoASA01 and another index called CiscoASA02. How can I configure Logstash so that all events coming from the first device go into the CiscoASA01 index and all events coming from the second device go into the second index? Thank you.

You can use an if conditional to separate the logs. Assume your first device is CiscoASA01 and the second is CiscoASA02.
Here is the output configuration:
output {
  if [host] == "CiscoASA01" {
    elasticsearch {
      host => "elasticsearch_server"
      index => "CiscoASA01"
    }
  }
  if [host] == "CiscoASA02" {
    elasticsearch {
      host => "elasticsearch_server"
      index => "CiscoASA02"
    }
  }
}
[host] is a field on the Logstash event; you can use it to route logs to different outputs.
Hope this can help you.
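If the devices report by source IP rather than a resolvable hostname, the same pattern applies; a minimal sketch, assuming [host] carries the sender's address (the IPs below are placeholders, and the option names mirror the answer above; newer Logstash versions use hosts => [...] instead of host):
output {
  if [host] == "192.0.2.10" {
    elasticsearch {
      host => "elasticsearch_server"
      index => "CiscoASA01"
    }
  } else if [host] == "192.0.2.20" {
    elasticsearch {
      host => "elasticsearch_server"
      index => "CiscoASA02"
    }
  }
}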

Related

Elastic stack: I need to set the Time Filter field name with another field

I need to read messages (whose content is logs) from RabbitMQ with Logstash and then send them to Elasticsearch to build monitoring visualizations in Kibana. So I wrote the input to read from RabbitMQ in Logstash like this:
input {
  rabbitmq {
    queue => "testLogstash"
    host => "localhost"
  }
}
and I wrote the output configuration to store the events in Elasticsearch like this:
output {
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "d13-%{+YYYY.MM.dd}"
  }
}
Both of them are placed in myConf.conf.
The content of each message is a JSON document with fields like this:
{
  "mDate": "MMMM dd YYYY, HH:mm:ss.SSS",
  "name": "test name"
}
But there are two problems. First, when I create a new index pattern in Kibana, mDate is not offered as the Time Filter field name. Second, if I use that timestamp instead of the default @timestamp, the field cannot be used when building graphs. I think the reason is the data type of the field: it should be a date, but it is treated as a string.
I tried to convert the value of the field to a date with mutate in the Logstash config like this:
filter {
  mutate {
    convert => { "mdate" => "date" }
  }
}
Now, two questions arise:
1. Is this the problem? If yes, what is the right way to fix it?
2. My main need is to use the time at which messages were put on the queue, not the time when Logstash picks them up. What is the best solution?
If you don't specify a value for @timestamp, you should get the current system time when Elasticsearch indexes the document. With that, you should be able to see items in Kibana.
If I understand you correctly, you'd rather use your mDate field for @timestamp. For this, use the date{} filter in Logstash.
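A minimal sketch of that date filter, assuming the pattern shown in the question's JSON (adjust it to the exact format of your mDate values):
filter {
  date {
    # parse the original message timestamp and write it into @timestamp
    match => [ "mDate", "MMMM dd yyyy, HH:mm:ss.SSS" ]
    target => "@timestamp"
  }
}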

Create index in kibana without using kibana

I'm very new to Elasticsearch/Kibana/Logstash and can't seem to find a solution for this one.
I'm trying to create an index that I will see in Kibana without having to use the POST command in the Dev Tools section.
I have set up test.conf:
input {
  file {
    path => "/home/user/logs/server.log"
    type => "test-type"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "new-index"
  }
}
and then run bin/logstash -f test.conf from the Logstash directory.
What I get is that I can't find new-index in Kibana (Index Patterns section). When I query Elasticsearch at http://localhost:9200/new-index/ it returns an error, and when I go to http://localhost:9600/ (the port Logstash reports) it doesn't seem to show any errors.
Thanks a lot for the help!!
You won't be able to find the index you've created using Logstash in Kibana unless you manually add it as an index pattern in the Management section of Kibana.
Make sure that you use the same name as the index you created with Logstash. Have a look at the doc, which says:
When you define an index pattern, indices that match that pattern must
exist in Elasticsearch. Those indices must contain data.
which pretty much means the index has to exist (and contain data) before you can create the index pattern in Kibana. Hope it helps!
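To confirm that the index exists and actually holds documents before defining the pattern, you can ask Elasticsearch directly; a quick check using the host and index name from the question:
curl "localhost:9200/_cat/indices/new-index?v"
Per the doc quoted above, the docs.count column should be greater than zero before the index pattern is useful in Kibana.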
I actually succeeded in creating the index without first creating it in Kibana.
I used the following config file:
input {
  file {
    path => "/home/hadar/tmp/logs/server.log"
    type => "test-type"
    id => "NEWTRY"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY:day} %{HOUR:hour}:%{MINUTE:minute}:%{SECOND:second} - %{LOGLEVEL:level} - %{WORD:scriptName}.%{WORD:scriptEND} - " }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "new-index"
    codec => line { format => "%{year}-%{month}-%{day} %{hour}:%{minute}:%{second} - %{level} - %{scriptName}.%{scriptEND} - \"%{message}\"" }
  }
}
I made sure that the index wasn't already in Kibana (I tried with other index names too, just to be sure) and eventually I did see the index with the log's info both in Kibana (after adding it in the Index Patterns section) and in Elasticsearch when I went to http://localhost:9200/new-index.
The only thing I had to do was erase the .sincedb_XXX files, which are created under data/plugins/inputs/file/ after every Logstash run
OR
the other solution (for test environments only) is to add sincedb_path => "/dev/null" to the file input plugin, which tells it not to persist a .sincedb_XXX file.
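A minimal sketch of that test-only variant, based on the input block from the question:
input {
  file {
    path => "/home/user/logs/server.log"
    type => "test-type"
    start_position => "beginning"
    # test environments only: read positions are not persisted,
    # so the file is re-read from the beginning on every run
    sincedb_path => "/dev/null"
  }
}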
You can also create the index directly in Elasticsearch using the Create Index API, https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html,
and these indices can then be used in Kibana.
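For example, a one-liner against the localhost node used elsewhere in this thread:
curl -X PUT "localhost:9200/new-index"
This creates an empty new-index with default settings; as noted above, Kibana still expects the index to contain documents before the index pattern becomes useful.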

Logstash doc_as_upsert cross index in Elasticsearch to eliminate duplicates

I have a logstash configuration that uses the following in the output block in an attempt to mitigate duplicates.
output {
  if [type] == "usage" {
    elasticsearch {
      hosts => ["elastic4:9204"]
      index => "usage-%{+YYYY-MM-dd-HH}"
      document_id => "%{[@metadata][fingerprint]}"
      action => "update"
      doc_as_upsert => true
    }
  }
}
The fingerprint is calculated from a SHA1 hash of two unique fields.
This works when Logstash sees the same doc in the same index, but since the command that generates the input data doesn't have a reliable rate at which different documents appear, Logstash will sometimes insert duplicate docs in a different date-stamped index.
For example, the command that Logstash runs to get the input generally returns the last two hours of data. However, since I can't definitively tell when a doc will appear/disappear, I run the command every fifteen minutes.
This is fine when the duplicates occur within the same hour. However, when the hour or day date stamp rolls over, and the document still appears, elastic/logstash thinks it's a new doc.
Is there a way to make the upsert work cross index? These would all be the same type of doc, they would simply apply to every index that matches "usage-*"
A new index is an entirely new keyspace and there's no way to tell ES to not index two documents with the same ID in two different indices.
However, you could prevent this by adding an elasticsearch filter to your pipeline which would look up the document in all indices and if it finds one, it could drop the event.
Something like this would do (note that usages would be an alias spanning all usage-* indices):
filter {
  elasticsearch {
    hosts => ["elastic4:9204"]
    index => "usages"
    query => "_id:%{[@metadata][fingerprint]}"
    fields => { "_id" => "other_id" }
  }
  # if the document was found, drop this one
  if [other_id] {
    drop {}
  }
}
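The usages alias mentioned above can be set up once with the _aliases API; a hedged example, reusing the elastic4:9204 host from the question:
curl -X POST "elastic4:9204/_aliases" -H 'Content-Type: application/json' -d '
{
  "actions": [
    { "add": { "index": "usage-*", "alias": "usages" } }
  ]
}'
Note that the wildcard is expanded when the alias is created, so the alias only covers indices that exist at that moment; it would need to be refreshed (or defined via an index template) as new date-stamped indices appear.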

Logstash single input and multiple output

I have configured Logstash to get input from one Filebeat port.
Filebeat is configured with two different paths. Is it possible to send the logs to two different indices?
Logstash input part:
input {
  beats {
    type => "stack"
    port => 5044
  }
}
Filebeat input part:
prospectors:
  -
    paths:
      - E://stack/**/*.txt
      - E://test/**/*.txt
Now I need to send "stack" to one index and "test" to the other index.
How do I configure the Logstash output part?
What you can do is use the type property to decide which index the log being processed should be stored in.
So your elasticsearch output could simply look like this, i.e. depending on the type value, the selected index will be different:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{type}"
  }
}
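For %{type} to differ per path, each Filebeat prospector needs to tag its events with its own type. A rough sketch of what that looked like in the older prospector-based Filebeat configuration the question uses (document_type was deprecated in later Filebeat releases, which would use fields and a different index expression instead):
filebeat:
  prospectors:
    -
      paths:
        - E://stack/**/*.txt
      document_type: stack
    -
      paths:
        - E://test/**/*.txt
      document_type: test
With that in place, events from the first path arrive in Logstash with type set to stack and land in the stack index, and likewise for test.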

To copy an index from one machine to another in elasticsearch

I have some indexes on one of my machines. I need to copy them to another machine; how can I do that in Elasticsearch?
I did find some good documentation here, but since I'm a newbie to the Elasticsearch ecosystem and I'm only toying with small indices, I thought I would use a plugin or an approach that is less time consuming.
I would use Logstash with an elasticsearch input plugin and an elasticsearch output plugin.
After installing Logstash, you can create a configuration file copy.conf that looks like this:
input {
  elasticsearch {
    hosts => "localhost:9200"    # source ES host
    index => "source_index"
  }
}
filter {
  mutate {
    remove_field => [ "@version", "@timestamp" ]    # remove added junk
  }
}
output {
  elasticsearch {
    host => "localhost"          # target ES host
    port => 9200
    protocol => "http"
    manage_template => false
    index => "target_index"
    document_id => "%{id}"       # name of your ID field
    workers => 1
  }
}
And then after setting the correct values (source/target host + source/target index), you can run this with bin/logstash -f copy.conf
I can see 3 options here:
Snapshot/Restore - you can move your data across geographical locations.
Logstash reindex - as pointed out by Val.
Stream2es - a simpler solution.
You can use the Snapshot and Restore feature as well, where you take a snapshot (backup) of one index and then restore it somewhere else.
Just have a look at
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html
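A minimal sketch of that flow with curl (repository name, paths, and hosts are placeholders; a shared filesystem repository must be reachable from both clusters and listed in path.repo on the nodes):
# on the source cluster: register a filesystem snapshot repository
curl -X PUT "localhost:9200/_snapshot/my_backup" -H 'Content-Type: application/json' -d '
{
  "type": "fs",
  "settings": { "location": "/mount/backups/my_backup" }
}'

# take a snapshot (all indices by default; pass "indices" in the body to limit it)
curl -X PUT "localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true"

# on the target cluster (with the same repository registered): restore it
curl -X POST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore"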
