How to run _update elasticsearch with external script - elasticsearch

I want to run this example update:
curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{
"script" : "ctx._source.text = \"some text\""
}'
(http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-update.html), but I got this error: {"error":"ElasticsearchIllegalArgumentException[failed to execute script]; nested: ScriptException[dynamic scripting for [mvel] disabled]; ","status":400}.
From this page http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html, I found out that I need to place my script (I called it demorun.groovy) in the config/scripts directory and run it by name.
I did that, and now I try to reference it as
curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{
"script" : "demorun.groovy"
}'
But I still get the same error.
I suppose I am referencing it wrong. How do I run _update with an external script?
My demorun.groovy:
ctx._source.text = "some text"

The error message you are receiving indicates that dynamic scripting is disabled, which is the default setting. You need to enable it to get scripting to work:
Enabling dynamic scripting
We recommend running Elasticsearch behind an application or proxy,
which protects Elasticsearch from the outside world. If users are
allowed to run dynamic scripts (even in a search request), then they
have the same access to your box as the user that Elasticsearch is
running as. For this reason dynamic scripting is allowed only for
sandboxed languages by default.
First, you should not run Elasticsearch as the root user, as this
would allow a script to access or do anything on your server, without
limitations. Second, you should not expose Elasticsearch directly to
users, but instead have a proxy application in between. If you do
intend to expose Elasticsearch directly to your users, then you have
to decide whether you trust them enough to run scripts on your box or
not. If you do, you can enable dynamic scripting by adding the
following setting to the config/elasticsearch.yml file on every node:
script.disable_dynamic: false
While this still allows execution of named scripts provided in the
config, or native Java scripts registered through plugins, it also
allows users to run arbitrary scripts via the API. Instead of sending
the name of the file as the script, the body of the script can be sent
instead.
There are three possible configuration values for the
script.disable_dynamic setting; the default value is sandbox:
true: all dynamic scripting is disabled, scripts must be placed in the
config/scripts directory.
false: all dynamic scripting is enabled, scripts may be sent as
strings in requests.
sandbox: scripts may be sent as strings for languages that are
sandboxed.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html

The problem with the ES request above is that it's using the wrong format.
Dynamic scripting (i.e. inline scripting):
curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{
"script" : "ctx._source.text = \"some text\""
}'
Static scripting (i.e. offline scripting):
curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{
"script" : {
"lang": "groovy",
"script_file": "some-name",
"params": {
"foo": "some text"
}
}
}'
And you are supposed to put
ctx._source.text = foo
into .../elasticsearch/config/scripts/some-name.groovy to reproduce almost the same functionality. Actually, it's much better, because you don't need to open up ES to dynamic scripting, and you get argument passing.
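Putting the pieces together, a minimal end-to-end sketch (hedged; demorun is the script name from the question, and foo follows the params example above):
# 1. Create the file script on every node (no escaped quotes needed inside the file)
echo 'ctx._source.text = foo' > /path/to/elasticsearch/config/scripts/demorun.groovy
# 2. Reference it by file name, without the .groovy extension, and pass the value as a param
curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{
"script" : {
"lang": "groovy",
"script_file": "demorun",
"params": { "foo": "some text" }
}
}'
# 3. Verify the update
curl -XGET 'localhost:9200/test/type1/1?pretty'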

In Elasticsearch 2.0, script.disable_dynamic: false does not work, because:
Exception in thread "main" java.lang.IllegalArgumentException:
script.disable_dynamic is not a supported setting, replace with
fine-grained script settings. Dynamic scripts can be enabled for all
languages and all operations by replacing script.disable_dynamic:
false with script.inline: on and script.indexed: on in
elasticsearch.yml
As the error says, you need to set the following in elasticsearch.yml:
script.inline: on
script.indexed: on
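If you only need scripting for specific operations, the same fine-grained settings the error message mentions can also be scoped per operation (a sketch based on the ES 2.x scripting settings; adjust to your needs):
# elasticsearch.yml -- enable dynamic scripts for all operations...
script.inline: on
script.indexed: on
# ...or, more narrowly, only for the update API
# script.update: on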

I encountered this same issue and resolved it by adding the following setting to the elasticsearch.yml file in the config folder:
script.disable_dynamic: false

Related

Why Ansible-Tower is ignoring extra variables?

I'm trying to launch a job workflow via the REST API and pass extra variables for the playbook to consume, but the returned body shows that the provided variables are put in the ignored_fields section.
I used Postman and curl to run the templates; both returned the same result.
CURL command
curl -X POST http://172.16.0.97/api/v2/job_templates/8/launch/ -H "Content-Type: application/json" -H "Authorization: Bearer JegxwfQrdKQXoRUtNWtWFz62FX5bTy" -d "{\"extra_vars\": {\"vendor\":\"juniper\"}}"
Returned body
{"job":34,"ignored_fields":{"extra_vars":{"vendor":"juniper"}},"id":34,"type":"job","url":"/api/v2/jobs/34/","related":{"created_by":"/api/v2/users/1/","modified_by":"/api/v2/users/1/","labels":"/api/v2/jobs/34/labels/","inventory":"/api/v2/inventories/1/","project":"/api/v2/projects/7/","extra_credentials":"/api/v2/jobs/34/extra_credentials/","credentials":"/api/v2/jobs/34/credentials/","unified_job_template":"/api/v2/job_templates/8/","stdout":"/api/v2/jobs/34/stdout/","job_events":"/api/v2/jobs/34/job_events/","job_host_summaries":"/api/v2/jobs/34/job_host_summaries/","activity_stream":"/api/v2/jobs/34/activity_stream/","notifications":"/api/v2/jobs/34/notifications/","job_template":"/api/v2/job_templates/8/","cancel":"/api/v2/jobs/34/cancel/","create_schedule":"/api/v2/jobs/34/create_schedule/","relaunch":"/api/v2/jobs/34/relaunch/"},"summary_fields":{"inventory":{"id":1,"name":"Demo Inventory","description":"","has_active_failures":true,"total_hosts":1,"hosts_with_active_failures":1,"total_groups":0,"groups_with_active_failures":0,"has_inventory_sources":false,"total_inventory_sources":0,"inventory_sources_with_failures":0,"organization_id":1,"kind":""},"project":{"id":7,"name":"Cox-Phase3","description":"","status":"successful","scm_type":"git"},"job_template":{"id":8,"name":"Port Flap","description":""},"unified_job_template":{"id":8,"name":"Port Flap","description":"","unified_job_type":"job"},"created_by":{"id":1,"username":"admin","first_name":"","last_name":""},"modified_by":{"id":1,"username":"admin","first_name":"","last_name":""},"user_capabilities":{"delete":true,"start":true},"labels":{"count":0,"results":[]},"extra_credentials":[],"credentials":[]},"created":"2019-05-14T09:43:16.115516Z","modified":"2019-05-14T09:43:16.177517Z","name":"Port Flap","description":"","job_type":"run","inventory":1,"project":7,"playbook":"main.yml","forks":0,"limit":"","verbosity":1,"extra_vars":"{}","job_tags":"","force_handlers":false,"skip_tags":"","start_at_task":"","timeout":0,"use_fact_cache":false,"unified_job_template":8,"launch_type":"manual","status":"pending","failed":false,"started":null,"finished":null,"elapsed":0.0,"job_args":"","job_cwd":"","job_env":{},"job_explanation":"","execution_node":"","controller_node":"","result_traceback":"","event_processing_finished":false,"job_template":8,"passwords_needed_to_start":[],"ask_diff_mode_on_launch":false,"ask_variables_on_launch":false,"ask_limit_on_launch":false,"ask_tags_on_launch":false,"ask_skip_tags_on_launch":false,"ask_job_type_on_launch":false,"ask_verbosity_on_launch":false,"ask_inventory_on_launch":false,"ask_credential_on_launch":false,"allow_simultaneous":false,"artifacts":{},"scm_revision":"","instance_group":null,"diff_mode":false,"job_slice_number":0,"job_slice_count":1,"credential":null,"vault_credential":null}
According to the fine manual, AWX (and thus Tower) version 3.0 and greater has made extra_vars more strict: https://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html#ug-jobtemplates-extravars
If you are running a version greater than 3.0, you will need to either turn on a playbook survey or set ask_variables_on_launch=True for that template, as sketched below.
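A minimal sketch of flipping that flag via the API (hedged; the template id 8 and the token are taken from the question above):
curl -X PATCH http://172.16.0.97/api/v2/job_templates/8/ -H "Content-Type: application/json" -H "Authorization: Bearer JegxwfQrdKQXoRUtNWtWFz62FX5bTy" -d "{\"ask_variables_on_launch\": true}"
After that, the same launch request should report the variables under extra_vars instead of ignored_fields.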
In my case, I'm using curl -L ... and the payload got lost after the redirect. Be sure to double check that if you find the extra_vars still gets ignored after ensuring ask_variables_on_launch=True.
Tangentially related: when using the AWX and Tower CLI, I ran into a similar issue of variables not being taken when launching jobs. The solution was that the "Prompt on Launch" setting needed to be checked on the Job Template in Tower for the variable to pass through. So much time wasted on such a simple miss.
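For reference, a tower-cli launch looks roughly like this (a sketch from memory; treat the exact flags as an assumption and check tower-cli job launch --help):
tower-cli job launch --job-template=8 --extra-vars='{"vendor": "juniper"}'
"Prompt on Launch" still has to be enabled on the template for the variables to be accepted.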

How to create index and type in elastic search?

I have installed Elasticsearch version 2.3.2. I have to add an index and type to it. Previously I used the Sense plugin to achieve this, but the add-on was removed from the web store. Please give a suggestion.
The Sense plugin is now a Kibana app. Please refer to the official reference for installation.
To answer your question: you can create an index and a type (via its mapping) in Elasticsearch by running the curl command below; note that index names must be lowercase:
curl -XPUT "http://localhost:9200/index_name" -d '{"mappings": {"type_name": {}}}'
You can use a REST client like Postman to do this. You can get Postman as a Chrome extension.
The other way is to SSH into one of the nodes in your cluster and run the POST command using curl.
curl -X POST 'localhost:9200/bookindex/books' -H 'Content-Type: application/json' -d'
{
"bookId" : "A00-3",
"author" : "Sankaran",
"publisher" : "Mcgrahill",
"name" : "how to get a job"
}'
It will automatically create an index named 'bookindex' with type 'books' and index the data. If the index and type already exist, it will add the entry to the index.
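To confirm what was created, a quick check against the bookindex example above:
curl -XGET 'localhost:9200/bookindex?pretty'
This returns the index settings along with the mapping that was auto-generated for the books type.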
All operations in Elasticsearch can be done via REST API calls.
To create an index, use the index API:
curl -XPUT 'localhost:9200/twitter?pretty' -H 'Content-Type: application/json' -d'{"settings" : {"index" : {"number_of_shards" : 3, "number_of_replicas" : 0 }}}'
To create the mapping, you can use the _mapping endpoint:
curl -XPUT http://localhost:9200/twitter/tweets/_mapping -d @create_p4_schema_payload.json
Here, the mapping is provided via a JSON file named create_p4_schema_payload.json, which contains the following:
{
"properties": {
"user_name": {
"type": "text"
}
}
}
All of these can be run via any terminal which supports curl. For Windows, you may install Cygwin to run Linux commands from the command prompt.
Like it was said above, you can access it through REST API calls. The command you need to run is:
curl -XPUT 'http://localhost:9200/indexname?include_type_name=true'
curl commands are raw text that can be imported into Postman, for example, or simply run from a terminal. Simply put:
It's a PUT API call to ElasticSearch/indexname, adding the query parameter include_type_name; the type itself then goes into the request body's mappings, as sketched below.
The reference guide is at: Elastic Search - Create index API
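A sketch of what that looks like with a typed mapping in the body (hypothetical my_index/my_type names; include_type_name applies to Elasticsearch 7.x, where mapping types are deprecated):
curl -XPUT 'http://localhost:9200/my_index?include_type_name=true' -H 'Content-Type: application/json' -d'
{
"mappings": {
"my_type": {
"properties": {
"user_name": { "type": "text" }
}
}
}
}'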
The Sense plugin was removed from the Chrome web store. You can use Kibana, which has a Sense-like dev tool, to perform Elasticsearch queries.
Follow this link to install Kibana.

Stopping nifi processor without wait/notify processor and curl commands

I want to terminate the InvokeHTTP processor as soon as it fails. For that I use the ExecuteStreamCommand processor. I have made a bat file with code like this:
curl 'http://localhost:8080/nifi-api/controller/process-groups/root/processors/f511a6a1-015d-1000-970e-969eac1e6fc5' -X PUT -H 'Accept: application/json' -d @stop.json -vv
and I have a related json file with code like this:
{
"status": {
"runStatus": "STOPPED"
},
"component": {
"state": "STOPPED",
"id": "f511a6a1-015d-1000-970e-969eac1e6fc5"
},
"id": "f511a6a1-015d-1000-970e-969eac1e6fc5",
"revision": {
"version": 30,
"clientId": "0343f0b9-015e-1000-7cd8-570f8953ec11"
}
}
I use my json file as an argument for the command inside the ExecuteStreamCommand processor, but it throws an exception.
What should I change?
All actions that you can do in NiFi through the web browser, you can also do through nifi-api.
Use Google Chrome and press F12 to activate DevTools
(other browsers also have this option),
then select the Network tab,
do the required action in NiFi (for example stop the processor),
then right-click the request and choose Copy -> Copy as cURL (bash).
Now you have a curl command in the clipboard that repeats the same NiFi action by calling nifi-api.
You can remove all header parameters (-H) except one: -H 'Content-Type: application/json'
so the stop action for my processor will look like this:
curl 'http://localhost:8080/nifi-api/processors/d03bbf8b-015d-1000-f7d6-2f949d44cb7f' -X PUT -H 'Content-Type: application/json' --data-binary '{"revision":{"clientId":"09dbb50e-015e-1000-787b-058ed0938d0e","version":1},"component":{"id":"d03bbf8b-015d-1000-f7d6-2f949d44cb7f","state":"STOPPED"}}'
Beware! Every time you change a processor (even its state), its version changes.
So before sending the stop request you have to get the current version & state.
You have to send a GET request to the same url as above, without any additional headers:
http://localhost:8080/nifi-api/processors/d03bbf8b-015d-1000-f7d6-2f949d44cb7f
where d03bbf8b-015d-1000-f7d6-2f949d44cb7f is the id of your processor.
You can just try this url in a browser, replacing the processor id with your own.
The response will be in json:
{"revision":
{"clientId":"09dbb50e-015e-1000-787b-058ed0938d0e","version":4},
"id":"d03bbf8b-015d-1000-f7d6-2f949d44cb7f",
"uri":
...a lot of information here about this processor...
}
You can take clientId and version from the result and use those attributes to build a correct STOP request, as in the sketch below.
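A bash sketch of the full get-then-stop sequence (hedged; assumes jq is installed, and the processor id is the example one from above):
#!/bin/bash
# processor to stop (replace with your own id)
ID='d03bbf8b-015d-1000-f7d6-2f949d44cb7f'
URL="http://localhost:8080/nifi-api/processors/$ID"
# 1. fetch the current revision object (contains version, and clientId if one was set)
REVISION=$(curl -s "$URL" | jq -c '.revision')
# 2. send the stop request carrying that revision
curl -s "$URL" -X PUT -H 'Content-Type: application/json' \
--data-binary "{\"revision\":$REVISION,\"component\":{\"id\":\"$ID\",\"state\":\"STOPPED\"}}"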
PS:
ExecuteStreamCommand transfers the flow file into the executed command as an input stream, which could cause problems.
Use ExecuteProcess instead, because you are passing all the parameters to curl on the command line and not through the input stream.
You can also stop a NiFi processor without using curl - you just need to build the correct sequence of processors, like this:
InvokeHTTP (get current state) -> EvaluateJsonPath (extract version and clientId) -> ReplaceText (build json for stop using attrs from prev step) -> InvokeHTTP (call stop)
Still, try to avoid the logic of stopping processors from within NiFi - sure, it's possible, but re-think your algorithm first.
Here is a template which shows how to stop the InvokeHTTP processor:
https://www.dropbox.com/s/uv14kuvk2evy9an/StopInvokeHttpPoceesor.xml?dl=0

Parse VMware REST API response

I'm trying to parse a json response from a REST API call. My awk is not strong. This is a bash shell script, and I use curl to get the response and write it to a file. My problem is solely trying to cut the response up into useful parts.
The response is all run together on one line; pretty-printed, it looks like this:
{
"value": {
"summary": "Patch for VMware vCenter Server Appliance 6.5.0",
"install_time": "2017-03-22T22:43:25 UTC",
"product": "VMware vCenter Server Appliance",
"build": "5178943",
"releasedate": "March 14, 2017",
"type": "vCenter Server with an external Platform Services Controller",
"version": "6.5.0.5300"
}
}
I'm interested in simply writing the type, version, and product strings into a log file. Ideally on 3 lines, but I really don't care; I simply need to be able to identify the build etc at the time this backup script ran, so if I need to rebuild & restore I can make sure I have a compatible build.
Your REST API gives you JSON, so it's best suited for a JSON parser like jq:
curl -s '/rest/endpoint' | jq -r '.value | .type,.version,.product' > config.txt
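If you want labeled lines in the log file, jq's string interpolation works too (a variant of the same filter; '/rest/endpoint' stands in for the real URL, as above):
curl -s '/rest/endpoint' | jq -r '.value | "type: \(.type)", "version: \(.version)", "product: \(.product)"' >> backup.log
which, for the response above, yields:
type: vCenter Server with an external Platform Services Controller
version: 6.5.0.5300
product: VMware vCenter Server Appliance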

How to bulk create (export/import) indices in elasticsearch?

I'm trying to upgrade our ELK stack from 1.x to 5.x following the reindex-from-remote instructions. I'm not sure how to export a list of the indices that I need to create and then import that list into the new instance. I've created a list of indices using this command, both with "pretty" and without, but I'm not sure which file format to use or what to do next with that file.
The create index instructions don't go into how to create more than one at a time, and the bulk instructions only refer to creating/indexing documents, not creating the indices themselves. Any assistance on how to best follow the upgrade instructions would be appreciated.
I apparently don't have enough reputation to link the "create index" and "bulk" instructions, so apologies for that.
With a single curl command you could create an index template that will trigger the index creation at the time the documents hit your ES 5.x cluster.
Basically, this single curl command will create an index template that will kick in for each new index created on-the-fly. You can then use the "reindex from remote" technique in order to move your documents from ES 1.x to ES 5.x and don't worry about index creation since the index template will take care of it.
curl -XPUT 'localhost:9200/_template/my_template' -H 'Content-Type: application/json' -d'
{
"template": "*",
"settings": {
"index.refresh_interval" : -1,
"index.number_of_replicas" : 0
}
}
'
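Once the template is in place, the reindex-from-remote call for each index looks roughly like this (a sketch; http://oldhost:9200 is a placeholder for your ES 1.x cluster, which must be whitelisted via reindex.remote.whitelist in elasticsearch.yml):
curl -XPOST 'localhost:9200/_reindex' -H 'Content-Type: application/json' -d'
{
"source": {
"remote": { "host": "http://oldhost:9200" },
"index": "my_old_index"
},
"dest": { "index": "my_old_index" }
}
'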
I was able to accomplish this with a formatted list of indices (created by feeding an index list through sed), then feeding that file through the following script:
#!/bin/bash
while read some_index; do
curl -XPUT "localhost:9200/$some_index?pretty" -H 'Content-Type: application/json' -d'
{
"settings" : {
"index" : {
"refresh_interval" : -1,
"number_of_replicas" : 0
}
}
}'
sleep 1
done < "$1"
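To produce the index list itself, the _cat API on the old cluster works well (one index name per line; pipe it through sed if you need to filter or rename; create_indices.sh is a hypothetical name for the script above):
curl -s 'oldhost:9200/_cat/indices?h=index' > index_list.txt
./create_indices.sh index_list.txt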
If anyone can point me in the direction of any pre-existing mechanisms in Elasticsearch, though, please do.
