When I try to run a curl command like:
curl -s -XPOST localhost:9200/_bulk --data-binary "@bulk_prova.elastic"; echo
Where bulk_prova.elastic is:
{ "update" : {"_id" : "1", "_type" : "type1", "_index" : "indexName"} }{ "script" : "ctx._source.topic = \"topicValue\""}
I got this error:
{"took":19872,"errors":true,"items":[{"update":{"_index":"indexName","_type":"type1","_id":"1","status":400,"error":{"type":"illegal_argument_exception","reason":"failed to execute script","caused_by":{"type":"script_exception","reason":"scripts of type [inline], operation [update] and lang [groovy] are disabled"}}}}]}
I searched for a solution and modified the elasticsearch.yml file to enable dynamic scripting, but every time I change the file and stop Elasticsearch, the service does not start again when I restart it.
Because of this strange behavior I do not know how to solve the issue.
I have version 2.2.0, and my intention is to add a field to one index (for now) or to several indices (once the problem is solved).
In Elasticsearch 2.3 this has been changed from:
script.disable_dynamic: false
to:
script.file: true
script.indexed: true
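Note that the error above complains specifically about inline scripts, so on 2.x you will most likely also need script.inline enabled. A minimal elasticsearch.yml sketch (setting names per the 2.x scripting docs; restart the service after editing, and double-check the YAML indentation, since a malformed file is a common reason the service refuses to start):
script.inline: true
script.indexed: true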
I used my PC as both the Spark master and a Spark worker, using Spark 2.3.1.
At first, I used Ubuntu 16.04 LTS.
Everything worked fine: I ran the SparkPi example (using spark-submit and spark-shell) and it ran without problems.
I also tried to run it using Spark's REST API, with this POST request:
curl -X POST http://192.168.1.107:6066/v1/submissions/create --header "Content-Type:application/json" --data '{
"action": "CreateSubmissionRequest",
"appResource": "file:/home/Workspace/Spark/spark-2.3.1-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.3.1.jar",
"clientSparkVersion": "2.3.1",
"appArgs": [ "10" ],
"environmentVariables" : {
"SPARK_ENV_LOADED" : "1"
},
"mainClass": "org.apache.spark.examples.SparkPi",
"sparkProperties": {
"spark.jars": "file:/home/Workspace/Spark/spark-2.3.1-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.3.1.jar",
"spark.driver.supervise":"false",
"spark.executor.memory": "512m",
"spark.driver.memory": "512m",
"spark.submit.deployMode":"cluster",
"spark.app.name": "SparkPi",
"spark.master": "spark://192.168.1.107:7077"
}
}'
After testing this and that, I had to move to Windows, since the work will be done on Windows anyway.
I was able to run the master and the worker (manually), add winutils.exe, and run the SparkPi example with spark-shell and spark-submit; everything ran fine there too.
The problem appeared when I used the REST API, with this POST request:
curl -X POST http://192.168.1.107:6066/v1/submissions/create --header "Content-Type:application/json" --data '{
"action": "CreateSubmissionRequest",
"appResource": "file:D:/Workspace/Spark/spark-2.3.1-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.3.1.jar",
"clientSparkVersion": "2.3.1",
"appArgs": [ "10" ],
"environmentVariables" : {
"SPARK_ENV_LOADED" : "1"
},
"mainClass": "org.apache.spark.examples.SparkPi",
"sparkProperties": {
"spark.jars": "file:D:/Workspace/Spark/spark-2.3.1-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.3.1.jar",
"spark.driver.supervise":"false",
"spark.executor.memory": "512m",
"spark.driver.memory": "512m",
"spark.submit.deployMode":"cluster",
"spark.app.name": "SparkPi",
"spark.master": "spark://192.168.1.107:7077"
}
}'
Only the path is slightly different, but my worker always failed.
The logs said:
"Exception from the cluster: java.lang.NullPointerException
org.apache.spark.deploy.worker.DriverRunner.downloadUserJar(DriverRunner.scala:151)
org.apache.spark.deploy.worker.DriverRunner.prepareAndRunDriver(DriverRunner.scala:173)
org.apache.spark.deploy.worker.DriverRunner$$anon$1.run(DriverRunner.scala:92)"
I searched, but no solution has come up yet.
So, finally I found the cause.
I read the source from:
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/worker/DriverRunner.scala
From inspecting it, I concluded that the problem is not in Spark itself, but in the parameter not being read correctly, which means I had somehow used the wrong parameter format.
So, after trying several things, this is the right one:
appResource": "file:D:/Workspace/Spark/spark-2.3.1-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.3.1.jar"
changed to:
appResource": "file:///D:/Workspace/Spark/spark-2.3.1-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.3.1.jar"
And I did the same with the spark.jars parameter.
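For reference, the corrected fragment of the request body then looks like this (appResource at the top level, spark.jars inside sparkProperties; the paths are the ones from the question, so adjust them to your own layout):
"appResource": "file:///D:/Workspace/Spark/spark-2.3.1-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.3.1.jar",
"spark.jars": "file:///D:/Workspace/Spark/spark-2.3.1-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.3.1.jar",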
That little difference cost me almost 24 hours of work...
I installed Elasticsearch from the Debian package and installed X-Pack in it.
Now, I want to verify whether X-Pack was successfully installed.
Is there a simple way to verify this?
You can call
GET _cat/plugins?v
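Over plain HTTP that is, for example (assuming Elasticsearch is listening on localhost:9200; when X-Pack is installed as a plugin it shows up in the component column):
curl -XGET 'http://localhost:9200/_cat/plugins?v'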
X-Pack comes pre-installed from Elasticsearch 6.3 onwards. Refer to https://www.elastic.co/what-is/open-x-pack for more info on this.
You can check whether X-Pack is installed using: curl -XGET 'http://localhost:9200/_nodes'
The relevant output snippet looks like this:
"attributes": {
"ml.machine_memory": "67447586816",
"xpack.installed": "true",
"transform.node": "true",
"ml.max_open_jobs": "512",
"ml.max_jvm_size": "27917287424"
}
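Since the _nodes output is quite large, you can filter it down to just that flag, for example with a simple grep (the xpack.installed attribute only appears on versions that ship X-Pack):
curl -s 'http://localhost:9200/_nodes' | grep -o '"xpack.installed":"[^"]*"'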
As a Kibana newbie, I would like to know how to set the default index pattern programmatically, rather than setting it through the Kibana UI in a web browser the first time Kibana is viewed, as mentioned on https://www.elastic.co/guide/en/kibana/current/setup.html
Elasticsearch stores all Kibana metadata in the .kibana index. Kibana configuration such as defaultIndex and advanced settings is stored under the index/type/id .kibana/config/4.5.0, where 4.5.0 is the version of your Kibana.
So you can set or change defaultIndex with the following steps:
Add the index pattern you want to set as defaultIndex to Kibana. You can do that by executing the following command:
curl -XPUT http://<es node>:9200/.kibana/index-pattern/your_index_name -d '{"title" : "your_index_name", "timeFieldName": "timestampFieldNameInYourInputData"}'
Change your Kibana config to set the index pattern added earlier as defaultIndex:
curl -XPUT http://<es node>:9200/.kibana/config/4.5.0 -d '{"defaultIndex" : "your_index_name"}'
Note: make sure you give the correct index name everywhere, a valid timestamp field name, and the right Kibana version; for example, if you are using Kibana 4.1.1, replace 4.5.0 with 4.1.1.
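For example, with a hypothetical index called my-logs that has a @timestamp field, and Kibana 4.5.0, the two calls would be:
curl -XPUT http://localhost:9200/.kibana/index-pattern/my-logs -d '{"title" : "my-logs", "timeFieldName": "@timestamp"}'
curl -XPUT http://localhost:9200/.kibana/config/4.5.0 -d '{"defaultIndex" : "my-logs"}'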
In Kibana 6.5.3 this can be achieved by calling the Kibana API.
curl -X POST "http://localhost:5601/api/saved_objects/index-pattern/logstash" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d'
{
"attributes": {
"title": "logstash-*",
"timeFieldName": "#timestamp"
}
}
'
The docs are here; they do mention that the feature is experimental.
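Note that the call above only creates the index pattern; it does not necessarily mark it as the default. Recent Kibana versions also expose an advanced-settings endpoint that can set defaultIndex; the exact shape may vary by version, so treat this as a sketch and check the API docs for your release (the id logstash matches the saved object created above):
curl -X POST "http://localhost:5601/api/kibana/settings/defaultIndex" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d '{"value": "logstash"}'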
Due to a security vulnerability in ES 1.3.4, we upgraded to ES 1.3.9, which disabled dynamic Groovy scripting. As a result, the mapping transformations are failing with the error message "dynamic scripting for [groovy] disabled". I tried the approach in https://www.elastic.co/guide/en/elasticsearch/reference/1.3/modules-scripting.html of externalizing the script into a script file, but the script file is not getting invoked, or the transformation is not working. How can we achieve the transformation using a script file?
The mapping transform is as follows:
"transform" : [{
"script" : "ctx._source['downloadCountInt'] = (ctx._source['downloadCount']==null)? ctx._source['downloadCount'] : ctx._source['downloadCount'].replaceAll(/\\D/, '');",
"lang" : "groovy"
}]
I tried putting the script ctx._source['downloadCountInt'] = (ctx._source['downloadCount']==null)? ctx._source['downloadCount'] : ctx._source['downloadCount'].replaceAll(/\\D/, ''); into a script file named "transform_download_count.groovy" at /etc/elasticsearch/scripts/transform_download_count.groovy; the log messages show that it was compiled correctly, but the transformation is never invoked.
With the script file /etc/elasticsearch/scripts/transform_download_count.groovy in place, try:
"transform" : {
"script_file" : "transform_download_count",
"lang" : "groovy"
}
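For context, the transform block sits inside the type mapping, so a complete index-creation request (index, type, and field names here are hypothetical) would look roughly like this on 1.3:
curl -XPUT 'http://localhost:9200/myindex' -d '{
  "mappings": {
    "mytype": {
      "transform": {
        "script_file": "transform_download_count",
        "lang": "groovy"
      },
      "properties": {
        "downloadCount": { "type": "string" },
        "downloadCountInt": { "type": "integer" }
      }
    }
  }
}'
Also keep in mind that mapping transforms only run at index time, so documents that were already indexed will not get the new field until they are reindexed.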
I'm trying to make Elasticsearch + Kibana work. For some reason I get a blank Kibana dashboard.
My config.js is a default file with only one line changed:
elasticsearch: "http://127.0.0.1:9200",
Elasticsearch is working correctly; http://127.0.0.1:9200 returns this JSON:
{
"status" : 200,
"name" : "Ikthalon",
"version" : {
"number" : "1.1.1",
"build_hash" : "f1585f096d3f3985e73456debdc1a0745f512bbc",
"build_timestamp" : "2014-04-16T14:27:12Z",
"build_snapshot" : false,
"lucene_version" : "4.7"
},
"tagline" : "You Know, for Search"
}
But why is my Kibana dashboard blank? Maybe this is because I open it with the URL file:///home/sergey/Desktop/kibana-3.1.1/index.html#/dashboard/file/default.json? If so, how do I make it work?
You could open the same file from Firefox and Kibana would work.
Chrome blocks it as a security feature.
You need to serve Kibana from a web server. If you have Python installed you can use:
cd /path/to/kibana
python -m SimpleHTTPServer
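On Python 3 the module is called http.server instead:
cd /path/to/kibana
python3 -m http.server 8000
and then browse to http://localhost:8000.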
Or you can put the Kibana source code in the following directories if you are using Apache:
LAMP: /var/www
WAMP: C:/wamp/www
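For example, on a typical LAMP setup you could copy the Kibana folder into the document root and open it from there (paths are illustrative):
cp -r kibana-3.1.1 /var/www/kibana
and then browse to http://localhost/kibana.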
If you're using Logstash, there is an option to run Kibana embedded in Logstash. See the -a and -p flags here: http://logstash.net/docs/1.4.2/flags
JavaScript errors may have occurred. In Firefox I had two errors: the fontawesome-webfont.woff and logstash.json files couldn't be found. I added IIS MIME types for .woff and .json, and then the problem was resolved.