Trying to launch a job template via the REST API and pass extra variables for the playbook to consume, but the returned body shows that the provided variables are put in the ignored_fields section.
I used Postman and curl to launch the template; both returned the same result.
curl command:
curl -X POST http://172.16.0.97/api/v2/job_templates/8/launch/ -H "Content-Type: application/json" -H "Authorization: Bearer JegxwfQrdKQXoRUtNWtWFz62FX5bTy" -d "{\"extra_vars\": {\"vendor\":\"juniper\"}}"
Returned body
{"job":34,"ignored_fields":{"extra_vars":{"vendor":"juniper"}},"id":34,"type":"job","url":"/api/v2/jobs/34/","related":{"created_by":"/api/v2/users/1/","modified_by":"/api/v2/users/1/","labels":"/api/v2/jobs/34/labels/","inventory":"/api/v2/inventories/1/","project":"/api/v2/projects/7/","extra_credentials":"/api/v2/jobs/34/extra_credentials/","credentials":"/api/v2/jobs/34/credentials/","unified_job_template":"/api/v2/job_templates/8/","stdout":"/api/v2/jobs/34/stdout/","job_events":"/api/v2/jobs/34/job_events/","job_host_summaries":"/api/v2/jobs/34/job_host_summaries/","activity_stream":"/api/v2/jobs/34/activity_stream/","notifications":"/api/v2/jobs/34/notifications/","job_template":"/api/v2/job_templates/8/","cancel":"/api/v2/jobs/34/cancel/","create_schedule":"/api/v2/jobs/34/create_schedule/","relaunch":"/api/v2/jobs/34/relaunch/"},"summary_fields":{"inventory":{"id":1,"name":"Demo Inventory","description":"","has_active_failures":true,"total_hosts":1,"hosts_with_active_failures":1,"total_groups":0,"groups_with_active_failures":0,"has_inventory_sources":false,"total_inventory_sources":0,"inventory_sources_with_failures":0,"organization_id":1,"kind":""},"project":{"id":7,"name":"Cox-Phase3","description":"","status":"successful","scm_type":"git"},"job_template":{"id":8,"name":"Port Flap","description":""},"unified_job_template":{"id":8,"name":"Port Flap","description":"","unified_job_type":"job"},"created_by":{"id":1,"username":"admin","first_name":"","last_name":""},"modified_by":{"id":1,"username":"admin","first_name":"","last_name":""},"user_capabilities":{"delete":true,"start":true},"labels":{"count":0,"results":[]},"extra_credentials":[],"credentials":[]},"created":"2019-05-14T09:43:16.115516Z","modified":"2019-05-14T09:43:16.177517Z","name":"Port Flap","description":"","job_type":"run","inventory":1,"project":7,"playbook":"main.yml","forks":0,"limit":"","verbosity":1,"extra_vars":"{}","job_tags":"","force_handlers":false,"skip_tags":"","start_at_task":"","timeout":0,"use_fact_cache":false,"unified_job_template":8,"launch_type":"manual","status":"pending","failed":false,"started":null,"finished":null,"elapsed":0.0,"job_args":"","job_cwd":"","job_env":{},"job_explanation":"","execution_node":"","controller_node":"","result_traceback":"","event_processing_finished":false,"job_template":8,"passwords_needed_to_start":[],"ask_diff_mode_on_launch":false,"ask_variables_on_launch":false,"ask_limit_on_launch":false,"ask_tags_on_launch":false,"ask_skip_tags_on_launch":false,"ask_job_type_on_launch":false,"ask_verbosity_on_launch":false,"ask_inventory_on_launch":false,"ask_credential_on_launch":false,"allow_simultaneous":false,"artifacts":{},"scm_revision":"","instance_group":null,"diff_mode":false,"job_slice_number":0,"job_slice_count":1,"credential":null,"vault_credential":null}
According to the fine manual, AWX (and thus Tower) versions 3.0 and greater have made extra_vars handling more strict: https://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html#ug-jobtemplates-extravars
If you are running version 3.0 or greater, you will need to either enable a playbook survey or set ask_variables_on_launch=True for that job template.
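If you want to flip that flag via the API rather than the UI, a minimal sketch (reusing the host and template id from the question; the token is a placeholder) is a PATCH against the job template:
curl -X PATCH http://172.16.0.97/api/v2/job_templates/8/ -H "Content-Type: application/json" -H "Authorization: Bearer <your-token>" -d '{"ask_variables_on_launch": true}'
After that, relaunching with the same extra_vars payload should no longer report the variables under ignored_fields.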
In my case, I was using curl -L ... and the payload got lost after the redirect. Be sure to double-check for that if you find extra_vars still gets ignored after ensuring ask_variables_on_launch=True.
Tangentially related: when utilizing the AWX and Tower CLI I ran into a similar issue of variables not being taken when launching jobs. The solution was that on the Job Template in Tower the "Prompt on Launch" setting needed to be checked for the variables to pass through. So much time wasted on such a simple miss.
I have installed Elasticsearch version 2.3.2 and need to add an index and type to it. I previously used the Sense plugin for this, but the add-on was removed from the Chrome Web Store. Please suggest an alternative.
The Sense plugin is now a Kibana app. Please refer to the official reference for installation.
To answer your question: you can create an index and a type mapping in Elasticsearch by running curl commands like the ones below (note that index names must be lowercase):
curl -XPUT "http://localhost:9200/indexname"
curl -XPUT "http://localhost:9200/indexname/_mapping/typename" -d '{"properties": {"field1": {"type": "string"}}}'
You can use a REST client like Postman to do this; Postman is available as a Chrome extension.
The other way is to SSH into one of the nodes in your cluster and run the POST command using curl.
curl -X POST 'localhost:9200/bookindex/books' -H 'Content-Type: application/json' -d'
{
"bookId" : "A00-3",
"author" : "Sankaran",
"publisher" : "Mcgrahill",
"name" : "how to get a job"
}'
It will automatically create an index named 'bookindex' with type 'books' and index the data. If the index and type already exist, it will add the entry to the index.
All operations in Elasticsearch can be done via REST API calls.
To create an index, use the create index API:
curl -XPUT 'localhost:9200/twitter?pretty' -H 'Content-Type: application/json' -d'{"settings" : {"index" : {"number_of_shards" : 3, "number_of_replicas" : 0 }}}'
To create the mapping, you can use the _mapping endpoint:
curl -XPUT http://localhost:9200/twitter/tweets/_mapping -H 'Content-Type: application/json' -d @create_p4_schema_payload.json
Here, the mapping is provided via a JSON file named create_p4_schema_payload.json, which contains the following:
{
"properties": {
"user_name": {
"type": "text"
}
}
}
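If you want to confirm the mapping was applied, the standard get-mapping call works for that:
curl -XGET 'http://localhost:9200/twitter/_mapping?pretty'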
All of these can be run from any terminal that supports curl. On Windows, you can install Cygwin to run Linux commands from the command prompt.
As was said above, you can access it through REST API calls. The command you need to run is:
curl -XPUT 'http://localhost:9200/indexname?include_type_name=true' -H 'Content-Type: application/json' -d '{"mappings": {"typename": {"properties": {}}}}'
curl commands are raw text that can be imported into Postman, for example, or you can install the curl CLI and simply run them. Simply put:
It's a PUT API call to Elasticsearch at /indexname, with the include_type_name boolean query parameter telling Elasticsearch that the mappings in the body are keyed by a custom type name (typename here).
The reference guide is at: Elasticsearch - Create index API
The Sense plugin was removed from the Chrome Web Store. You can use Kibana instead, which has a Sense-like Dev Tools console to run Elasticsearch queries.
Follow this link to install Kibana.
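Once Kibana is installed, the Dev Tools console accepts the same requests without curl; for example, creating an index there would look roughly like this (the index name is just an example):
PUT /bookindex
{
  "settings": {
    "number_of_replicas": 0
  }
}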
I want to terminate an InvokeHTTP processor as soon as it fails. For that I use an ExecuteStreamCommand processor; I have made a .bat file with code like this:
curl 'http://localhost:8080/nifi-api/controller/process-groups/root/processors/f511a6a1-015d-1000-970e-969eac1e6fc5' -X PUT -H 'Accept: application/json' -d @stop.json -vv
and I have the related JSON file (stop.json) with content like this:
{
"status": {
"runStatus": "STOPPED"
},
"component": {
"state": "STOPPED",
"id": "f511a6a1-015d-1000-970e-969eac1e6fc5"
},
"id": "f511a6a1-015d-1000-970e-969eac1e6fc5",
"revision": {
"version": 30,
"clientId": "0343f0b9-015e-1000-7cd8-570f8953ec11"
}
}
I use my JSON file as an argument for the command inside the ExecuteStreamCommand processor, but it throws an exception.
What should I change?
All actions in NiFi that you can do through the web browser you can also do through the nifi-api.
In Google Chrome you can press F12 to activate DevTools
(other browsers also have this option)
then select the Network tab
do the required action in NiFi (for example, stop the processor)
right-click the request and choose Copy -> Copy as cURL (bash)
now you have a curl command in the clipboard that repeats the same NiFi action by calling the nifi-api
you can remove all header parameters (-H) except one: -H 'Content-Type: application/json'
so the stop action for my processor will look like this:
curl 'http://localhost:8080/nifi-api/processors/d03bbf8b-015d-1000-f7d6-2f949d44cb7f' -X PUT -H 'Content-Type: application/json' --data-binary '{"revision":{"clientId":"09dbb50e-015e-1000-787b-058ed0938d0e","version":1},"component":{"id":"d03bbf8b-015d-1000-f7d6-2f949d44cb7f","state":"STOPPED"}}'
Beware! Every time you change a processor (even its state), its revision version changes.
So before sending the stop request you have to get its current version and state.
You have to send a GET request to the same URL as above, without any additional headers:
http://localhost:8080/nifi-api/processors/d03bbf8b-015d-1000-f7d6-2f949d44cb7f
where d03bbf8b-015d-1000-f7d6-2f949d44cb7f is the id of your processor.
You can just try this URL in a browser, replacing the processor id with yours.
The response will be in JSON:
{"revision":
{"clientId":"09dbb50e-015e-1000-787b-058ed0938d0e","version":4},
"id":"d03bbf8b-015d-1000-f7d6-2f949d44cb7f",
"uri":
...a lot of information here about this processor...
}
You can take clientId and version from the result and use those attributes to build a correct STOP request.
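Putting the two calls together, a minimal sketch in bash (jq is my assumption here, any JSON parser will do; the processor id is the one from above):
NIFI=http://localhost:8080/nifi-api
PROC_ID=d03bbf8b-015d-1000-f7d6-2f949d44cb7f
# 1. GET the processor to read the current revision (version and clientId, as shown above)
STATE=$(curl -s "$NIFI/processors/$PROC_ID")
VERSION=$(echo "$STATE" | jq -r '.revision.version')
CLIENT_ID=$(echo "$STATE" | jq -r '.revision.clientId')
# 2. PUT the STOPPED state back using that exact revision
curl -s "$NIFI/processors/$PROC_ID" -X PUT -H 'Content-Type: application/json' \
  --data-binary "{\"revision\":{\"clientId\":\"$CLIENT_ID\",\"version\":$VERSION},\"component\":{\"id\":\"$PROC_ID\",\"state\":\"STOPPED\"}}"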
PS:
ExecuteStreamCommand passes the flow file to the executed command as an input stream, which could cause problems;
use ExecuteProcess instead, because you are passing all the parameters to curl on the command line and not through the input stream.
You can also stop the NiFi processor without using curl; you just need to build the correct sequence of processors, like this (key settings sketched after the flow below):
InvokeHTTP (get current state) -> EvaluateJsonPath (extract version and clientId) -> ReplaceText (build json for stop using attrs from prev step) -> InvokeHTTP (call stop)
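A rough sketch of that flow's key settings (the property values are illustrative, not copied from the template linked below): in EvaluateJsonPath set Destination to flowfile-attribute and add properties clientId = $.revision.clientId and version = $.revision.version; in ReplaceText set the Replacement Value to {"revision":{"clientId":"${clientId}","version":${version}},"component":{"id":"d03bbf8b-015d-1000-f7d6-2f949d44cb7f","state":"STOPPED"}}; in the final InvokeHTTP use HTTP Method PUT against http://localhost:8080/nifi-api/processors/d03bbf8b-015d-1000-f7d6-2f949d44cb7f with Content-Type application/json.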
Still, try to avoid the logic of stopping a processor from within NiFi; sure, it's possible, but re-think your algorithm first.
Here is a template which shows how to stop an InvokeHTTP processor:
https://www.dropbox.com/s/uv14kuvk2evy9an/StopInvokeHttpPoceesor.xml?dl=0
I'm trying to parse a json response from a REST API call. My awk is not strong. This is a bash shell script, and I use curl to get the response and write it to a file. My problem is solely trying to cut the response up into useful parts.
The response is all run together on one line and looks like this:
{
"value": {
"summary": "Patch for VMware vCenter Server Appliance 6.5.0",
"install_time": "2017-03-22T22:43:25 UTC",
"product": "VMware vCenter Server Appliance",
"build": "5178943",
"releasedate": "March 14, 2017",
"type": "vCenter Server with an external Platform Services Controller",
"version": "6.5.0.5300"
}
}
I'm interested in simply writing the type, version, and product strings into a log file. Ideally on 3 lines, but I really don't care; I simply need to be able to identify the build etc at the time this backup script ran, so if I need to rebuild & restore I can make sure I have a compatible build.
Your REST API gives you JSON, which is best handled by a JSON parser like jq:
curl -s '/rest/endpoint' | jq -r '.value | .type,.version,.product' > config.txt
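Against the sample response above, that prints the three values one per line, in the order given in the filter:
vCenter Server with an external Platform Services Controller
6.5.0.5300
VMware vCenter Server Appliance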
I'm trying to upgrade our ELK stack from 1.x to 5.x following the reindex-from-remote instructions. I'm not sure how to export a list of the indices that I need to create and then import that list into the new instance. I've created a list of indices using this command, both with "pretty" and without, but I'm not sure which file format to use or what to do with that file next.
The create index instructions don't go into how to create more than one at a time, and the bulk instructions only refer to creating/indexing documents, not creating the indices themselves. Any assistance on how to best follow the upgrade instructions would be appreciated.
I apparently don't have enough reputation to link the "create index" and "bulk" instructions, so apologies for that.
With a single curl command you could create an index template that will trigger index creation at the time the documents hit your ES 5.x cluster.
Basically, this one curl command creates an index template that will kick in for each new index created on the fly. You can then use the "reindex from remote" technique to move your documents from ES 1.x to ES 5.x and not worry about index creation, since the index template will take care of it.
curl -XPUT 'localhost:9200/_template/my_template' -H 'Content-Type: application/json' -d'
{
  "template": "*",
  "settings": {
    "index.refresh_interval" : -1,
    "index.number_of_replicas" : 0
  }
}
'
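For reference, the "reindex from remote" step itself is also just a curl call; a sketch (assuming the old 1.x cluster is reachable at oldhost:9200 and listed in reindex.remote.whitelist on the 5.x nodes, and using a placeholder index name):
curl -XPOST 'localhost:9200/_reindex' -H 'Content-Type: application/json' -d'
{
  "source": {
    "remote": { "host": "http://oldhost:9200" },
    "index": "my_old_index"
  },
  "dest": {
    "index": "my_old_index"
  }
}
'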
I was able to accomplish this with a formatted list of indices (an index list fed through sed), then feeding that file through the following script:
#!/bin/bash
while read -r some_index; do
  curl -XPUT "localhost:9200/$some_index?pretty" -H 'Content-Type: application/json' -d'
  {
    "settings" : {
      "index" : {
        "refresh_interval" : -1,
        "number_of_replicas" : 0
      }
    }
  }'
  sleep 1
done < "$1"
If anyone can point me in the direction of any pre-existing mechanisms in Elasticsearch, though, please do.