Logstash: "type" : "index_not_found_exception" - elasticsearch

I am following the Logstash tutorial.
This command returns the following error:
The following is my configuration:
I have two questions:
What is the correct GET request? Can someone give me a correct statement?
Why can I see that an index was created in the Head plugin after I run the -XGET request? I have never used -XPUT.

Related

XSB Runtime errors - MulVal

I'm trying to convert a Nessus scan XML file to MulVal input using a given conversion script, and I get the following runtime errors:
Error[XSB/Runtime/P]: [Permission (Operation) redefine on imported predicate: lists : member / 2] in compile/1
Error[XSB/Runtime/P]: [Existence (No procedure usermod : vulProperty / 3 exists)]
...and a few more similar 'No procedure usermod : ...' errors.
I haven't worked with XSB/Prolog before, so if anyone has any idea what's going on, or if you want to see some of the source code, please let me know.

Getting 'Invalid Boolean' error while inserting a file to influx's measurement through shell

I'm trying to insert a file into Influx through a shell script using the write API. Intermittently, the data fails to be inserted: I get an 'invalid boolean' response whenever the insert into Influx fails.
Kindly help me identify where I'm making the mistake.
Here is my code to write to influx
curl -s -X POST "http://influxdb:8186/write?db=mydb" --data-binary @data.txt
I get the below JSON response with an error, very inconsistently.
I'm generating the data.txt file after some calculation. Below is a screenshot of the file.
{"error":"unable to parse 'databackupstatus1,env=engg-az-dev2 tenant=dataplatform snapshot_name=engg-az-dev2-dataplatform-2019-07-08_12-43-59 state=Started backup_status=Not-Applicable': invalid boolean\nunable to parse 'databackupstatus1,env=engg-az-dev2 tenant=dataplatform snapshot_name=engg-az-dev2-dataplatform-2019-07-08_12-43-59 state=Completed backup_status=\"SUCCESS\"': invalid boolean"}
Note: The same data above has been inserted successfully multiple times.
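For what it's worth, in the InfluxDB line protocol unquoted field values are parsed as numbers or booleans, so bare values like tenant=dataplatform or backup_status=Not-Applicable trigger 'invalid boolean'. String field values must be double-quoted, and multiple fields must be joined with commas rather than spaces. A minimal sketch of a corrected line, reusing the field names from the error message above:

```shell
# Sketch: corrected line-protocol record. After the measurement name
# and its comma-separated tags, a single space separates tags from
# fields; fields are comma-separated, and string field values are
# double-quoted so they are not parsed as booleans.
cat > data.txt <<'EOF'
databackupstatus1,env=engg-az-dev2 tenant="dataplatform",snapshot_name="engg-az-dev2-dataplatform-2019-07-08_12-43-59",state="Started",backup_status="Not-Applicable"
EOF

# Every string field value is now quoted:
grep -c 'backup_status="Not-Applicable"' data.txt
```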

"semmni" is properly set. (more details) Expected Value : 128 Actual Value : 0

I'm trying to install Oracle 11g, and this happened. Is there a way to fix it?
I tried rebooting and running the script runfixup.sh, but that still doesn't resolve the problem.
I'm trying to install Oracle 11gR2 on Oracle Linux 7.4.
While the installer is performing prerequisite checks, I get this error:
This is a prerequisite condition to test whether the OS kernel parameter semmni is properly set.
More details :
Expected Value : 128
Actual Value : 0
Now if I execute as root:
/sbin/sysctl -a | grep sem
kernel.sem = 32000 1024000000 500 128
Which means that semmni=128.
Can somebody tell me what I am doing wrong?
You need to issue the following command
[root@localhost ~]# /sbin/sysctl -p
for the changes to take effect.
And then the value (the rightmost one in the output below) can be checked by issuing
[root@localhost ~]# more /proc/sys/kernel/sem
32000 1024000000 500 128
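As background, kernel.sem packs four semaphore parameters in a fixed order: semmsl, semmns, semopm, semmni, so semmni is the rightmost field. A small sketch extracting it from the output shown above:

```shell
# kernel.sem holds: semmsl semmns semopm semmni (in that order),
# so semmni is the fourth and last field.
sem_line="32000 1024000000 500 128"   # as read from /proc/sys/kernel/sem
semmni=$(echo "$sem_line" | awk '{print $4}')
echo "semmni=$semmni"
```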

Elastic Search Bulk Request length

I get this error when I try to push data:
[2017-09-28T22:58:13,583][DEBUG][o.e.a.b.TransportShardBulkAction]
[fE76H5K] [sw_shop5_20170928225616][3] failed to execute bulk item
(index) BulkShardRequest [[sw_shop5_20170928225616][3]] containing
[index {[sw_shop5_20170928225616][product][A40482001], source[n/a,
actual length: [41.6kb], max length: 2kb]}]
Can I extend the length in Elasticsearch? And if so, in the yml file or via curl?
Also I am getting :
Limit of total fields [1000] in index [sw_shop5_20170928231741] has been exceeded
I tried to set it with this curl call:
curl -XPUT 'localhost:9200/_all/_settings' -d ' { "index.mapping.total_fields.limit": 1000000 }'
But I can only apply this once the index already exists - the software I use always generates a new index, and setting it in elasticsearch.yml is not possible, because I get this:
Since elasticsearch 5.x index level settings can NOT be set on the nodes configuration like the elasticsearch.yaml, in system properties or command line arguments. In order to upgrade all indices the settings must be updated via the /${index}/_settings API. Unless all settings are dynamic all indices must be closed in order to apply the upgrade. Indices created in the future should use index templates to set default values.
Please ensure all required values are updated on all indices by executing:
curl -XPUT 'http://localhost:9200/_all/_settings?preserve_existing=true' -d '{ "index.mapping.total_fields.limit" : "100000" }'
That is, with this setting:
index.mapping.total_fields.limit: 100000
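Since the message says indices created in the future should use index templates, one option is to register a template so every new index matching a name pattern picks up the raised limit automatically. A sketch for the 5.x-era template API; the template name and index pattern here are made up and should be adjusted to your indices:

```shell
# Sketch: an index template that applies the raised field limit to
# every future index whose name matches the pattern. Template name
# ("raise_field_limit") and pattern ("sw_shop*") are hypothetical.
curl -XPUT 'http://localhost:9200/_template/raise_field_limit' \
  -H 'Content-Type: application/json' -d '
{
  "template": "sw_shop*",
  "settings": {
    "index.mapping.total_fields.limit": 100000
  }
}'
```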
Check the full stack trace in the ES log on the server.
I got this same error, and the stack trace pointed to a mapping issue:
java.lang.IllegalArgumentException: mapper [my_field] of different type, current_type [keyword], merged_type [text]
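To see how the conflicting field is currently mapped before the bulk request tries to merge a different type onto it, the existing mapping can be inspected first (a sketch; the index name is taken from the error above, and [my_field] stands in for whatever field your stack trace names):

```shell
# Sketch: dump the current mapping of the index named in the error,
# then look for the field the stack trace complains about.
curl -XGET 'localhost:9200/sw_shop5_20170928231741/_mapping?pretty'
```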

How do I prevent Elasticsearch's _analyze from interpreting YAML

I'm trying to use the _analyze api with text that looks like this:
--- some -- text ---
This request works as expected:
curl localhost:9200/my_index/_analyze -d '--'
{"tokens":[]}
However, this one fails:
curl localhost:9200/medical_documents/_analyze -d '---'
---
error:
root_cause:
- type: "illegal_argument_exception"
reason: "Malforrmed content, must start with an object"
type: "illegal_argument_exception"
reason: "Malforrmed content, must start with an object"
status: 400
Considering the formatting of the response, I assume that Elasticsearch tried to parse the request as YAML and failed.
If that is the case, how can I disable YAML parsing, or _analyze text that starts with ---?
The problem is not the YAML parser. The problem is that you are trying to create a type.
The following is incorrect (it will give you the Malforrmed content, must start with an object error):
curl localhost:9200/my_index/medical_documents/_analyze -d '---'
This will give you no error, but it is still incorrect, because it tells Elasticsearch to create a new type:
curl localhost:9200/my_index/medical_documents/_analyze -d '{"analyzer" : "standard","text" : "this is a test"}'
Analyzers are created at the index level. Verify with:
curl -XGET 'localhost:9200/my_index/_settings'
So the proper way is:
curl -XGET 'localhost:9200/my_index/_analyze' -d '{"analyzer" : "your_analyzer_name","text" : "----"}'
The analyzer needs to be created beforehand.
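For completeness, a custom analyzer is defined in the index settings when the index is created, after which it can be referenced by name in _analyze. A sketch: the analyzer name your_analyzer_name is kept from the answer above, but its tokenizer and filter chain here are made up for illustration:

```shell
# Sketch: create an index with a custom analyzer, then use it in
# _analyze. The analyzer's components (standard tokenizer plus a
# lowercase filter) are hypothetical - pick what your use case needs.
curl -XPUT 'localhost:9200/my_index' -d '
{
  "settings": {
    "analysis": {
      "analyzer": {
        "your_analyzer_name": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase"]
        }
      }
    }
  }
}'

curl -XGET 'localhost:9200/my_index/_analyze' \
  -d '{"analyzer": "your_analyzer_name", "text": "----"}'
```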