Index not found exception while sending data to OpenSearch on AWS using td-agent

I have set up OpenSearch in AWS and installed td-agent on Ubuntu 18.04. Below is my td-agent.conf file:
<source>
  @type tail
  path /home/rocket/PycharmProjects/EFK/log.json
  pos_file /home/rocket/PycharmProjects/EFK/log.json.pos
  format json
  time_format %Y-%m-%d %H:%M:%S
  tag log
</source>
<match *log*>
  @type opensearch
  host search-tanz-domain-2vbjmk2d4.us-west-2.es.amazonaws.com/
  port 9200
  scheme https
  ssl_verify false
  user admin
  password lah_001
  index_name test
</match>
When running td-agent I am getting the error below:
2023-01-26 15:41:44 +0000 [warn]: #0 Could not communicate to OpenSearch, resetting connection and trying again. [404] {"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index [:9200]","index":":9200","resource.id":":9200","resource.type":"index_or_alias","index_uuid":"_na_"}],"type":"index_not_found_exception","reason":"no such index [:9200]","index":":9200","resource.id":":9200","resource.type":"index_or_alias","index_uuid":"_na_"},"status":404}
So it's saying the index was not found, which is a bit strange, because as per my understanding, when you send data to OpenSearch or Elasticsearch you need to create the index pattern manually using Kibana. I have never faced this error with Elasticsearch; I am only facing this issue with OpenSearch, while both of them look to be the same.
Edit
I have created the index using the API and listed all the indices; I can see test.
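For reference, a minimal sketch of those two calls with curl, assuming the domain endpoint and admin credentials from the config above:
# create the "test" index
curl -XPUT -u admin:lah_001 "https://search-tanz-domain-2vbjmk2d4.us-west-2.es.amazonaws.com/test"
# list all indices to confirm that "test" exists
curl -XGET -u admin:lah_001 "https://search-tanz-domain-2vbjmk2d4.us-west-2.es.amazonaws.com/_cat/indices?v"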
I then started uploading the data using td-agent again, but I am still getting the same error as above.

I haven't used td-agent before, but based on the configuration file you provided, it seems like it is trying to reach the index test.
In OpenSearch, when you create a domain, it doesn't contain any indexes.
when you send data to opensearch or elasticsearch then you need to create index pattern manually by using kibana
I don't think this is necessarily true. You can create an index without using Kibana, and you can also create an index without sending any data. In fact, I think it is better practice to create the index first and send the data later.
I think if you create the index test first, it should work for you.
In Java:
import org.opensearch.client.opensearch.OpenSearchClient;
import org.opensearch.client.opensearch.indices.CreateIndexRequest;
import org.opensearch.client.transport.aws.AwsSdk2Transport;
import org.opensearch.client.transport.aws.AwsSdk2TransportOptions;
import software.amazon.awssdk.http.SdkHttpClient;
import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.regions.Region;
// build a client pointed at the domain endpoint (no scheme, no trailing slash)
SdkHttpClient httpClient = ApacheHttpClient.builder().build();
String host = "search-tanz-domain-2vbjmk2d4.us-west-2.es.amazonaws.com";
OpenSearchClient client = new OpenSearchClient(
    new AwsSdk2Transport(
        httpClient,
        host,
        Region.US_WEST_2,
        AwsSdk2TransportOptions.builder().build()));
// create the index
String index = "test";
CreateIndexRequest createIndexRequest = new CreateIndexRequest.Builder().index(index).build();
client.indices().create(createIndexRequest);
See: Sample code for Amazon OpenSearch Service

Related

How to send logs to multiple outputs with same match tags in Fluentd?

I have a Fluentd instance, and I need it to send my logs matching the fv-back-* tags to Elasticsearch and Amazon S3.
Is there a way to configure Fluentd to send data to both of these outputs? Right now I can only send logs to one output using the <match fv-back-*> config directive.
It is possible using the @type copy directive.
Docs: https://docs.fluentd.org/output/copy
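A minimal sketch of such a config, assuming the elasticsearch and s3 output plugins are installed (host, credentials, and bucket name are placeholders):
<match fv-back-*>
  @type copy
  <store>
    @type elasticsearch
    host localhost
    port 9200
    logstash_format true
  </store>
  <store>
    @type s3
    aws_key_id YOUR_AWS_KEY_ID
    aws_sec_key YOUR_AWS_SECRET_KEY
    s3_bucket your-log-bucket
    s3_region us-east-1
    path logs/
  </store>
</match>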

Filebeat > is it possible to send data to Elasticsearch by means of Filebeat without Logstash

I am a newbie to ELK. I installed Elasticsearch and Filebeat first, without Logstash, and I would like to send data from Filebeat to Elasticsearch. After I installed Filebeat and configured the log files and the Elasticsearch host, I started Filebeat, but then nothing happened, even though there are lots of rows in the log files that Filebeat prospects.
So is it possible to forward log data directly to the Elasticsearch host without Logstash at all?
It looks like your ES 2.3.1 is only configured to be reachable from localhost (the default since ES 2.0).
You need to modify your elasticsearch.yml file with this and restart ES:
network.host: 168.17.0.100
Then your filebeat output configuration needs to look like this:
output:
  elasticsearch:
    hosts: ["168.17.0.100:9200"]
Then you can check in your ES filebeat-* indices that you're getting the new log data (i.e. the hits.total count should increase over time):
curl -XGET 168.17.0.100:9200/filebeat-*/_search

How to export logstash data to external database

I've set up a Logstash instance, and GitHub logs are being forwarded to it. I need to run queries to fetch information from Logstash, like database queries.
Please let me know how to connect Logstash to an Oracle database and get that Logstash data from the DB.
Thanks
You can query Elasticsearch directly via the API; look at the examples in the API documentation here: http://www.elastic.co/guide/en/elasticsearch/reference/1.4/search.html
If you have the logs being captured with a specific 'type', you could do something like:
curl -XGET 'http://localhost:9200/logstash-*/_search?q=type:github&size=50'
which would search the logstash indices for anything with type 'github' and return the first 50 entries.

Kibana: store and load Kibana index from another Elasticsearch server?

Hi everyone,
In the Kibana configuration file, config.js, we can only configure the Elasticsearch address and the name of the Kibana index. I would like to be able to configure another ES address for the Kibana index, so I could store and load Kibana dashboards from/to another ES server than the one I'm requesting data from.
Could anyone please help? Thanks,
Hyacinthe

Running Kibana3, LogStash and ElasticSearch, all in one machine

Kibana3 works successfully when ElasticSearch is in a different machine, by setting elasticsearch: "http://different_machine_ip:9200" in config.js of Kibana3.
Now, I want to run all three of them on my local machine for testing. I'm using Windows 7 and the Chrome browser. I installed Kibana 3 on Tomcat 7. I started the embedded ElasticSearch from the LogStash jar file.
I set the ElasticSearch location to "localhost:9200", "127.0.0.1:9200", or "computer_name:9200". When I check Kibana3 in the browser, the ElasticSearch query revealed via spying has no logstash index:
curl -XGET 'http://localhost:9200//_search?pretty' -d ''
As you can see, the index part is empty, showing // only. The expected query should look like this:
curl -XGET 'http://localhost:9200/logstash-2013.08.13/_search?pretty' -d 'Some JSON Data'
The browser is able to call the ElasticSearch API successfully. For example, typing http://localhost:9200/logstash-2013.08.13/_mapping?pretty=true in the address bar returns the mapping of the logstash index. This proves that there is no problem connecting to ElasticSearch.
The problem here is that the index is empty in the Kibana query. Why is the index empty?
Kibana 3 works differently from Kibana 1 and 2: it runs entirely in the browser.
The config file is read by JavaScript and executed in your browser, so localhost:9200 tells Kibana to look for ElasticSearch running on the laptop in front of you, not the server.
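For illustration, the relevant line in Kibana 3's config.js would look something like this (the IP is a placeholder; it must be an address reachable from the machine the browser runs on):
/* evaluated by the browser, so point it at an address the browser can reach */
elasticsearch: "http://192.168.1.50:9200",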
BTW - recent versions of LogStash have Kibana bundled, so you don't have to host it independently.