I am new to Elasticsearch. What I need to do is send a specific query to the database and receive the result in Unity through its REST API.
Roughly: [unity] ---query--> [DB] ---data--> [restful API] ---> [unity]. This is what I want to program.
I am making an IoT navigator in Unity.
I want the Unity NavMesh agent to move (e.g. to coordinates x, y, z) by receiving data from Elasticsearch once every 3 seconds. My GET request is http://localhost:9200/location/_search?q=x:6.7. This URL returns just one document, but the response also contains header/metadata fields, so I don't know how to handle it.
Is it possible to get data from Elasticsearch every 3 seconds?
How do I get only the result data, without the header?
Can somebody help me? ㅠㅠ
This is what I did: you can send a GET request to your DB endpoint and append your query to the end of the URL.
https://xxxxx.northamerica-northeast1.gcp.elastic-cloud.com:xxxx/xxxx/_search?&size=1&sort=timestamp:desc
The above query returns the latest entry in my xxxx cluster.
Once you have the data you want, you can use JSON deserialization to turn the response into your own object. You can then use whatever you need without the header. (Elasticsearch can also strip the metadata server-side with the filter_path query parameter, e.g. &filter_path=hits.hits._source.)
To get data once every 3 seconds, you can use a Unity coroutine that sends the GET request every 3 seconds.
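As a sketch of the deserialization step: the Elasticsearch search response nests the documents under hits.hits[]._source, and a JSON deserializer simply ignores any header fields you don't declare. In Unity you would do this in C# (e.g. with Json.NET) inside the coroutine; the example below shows the same shape in Go for illustration, and the x/y/z field names are an assumption about the location index:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// esResponse mirrors only the part of the Elasticsearch search
// response we care about; the "header" fields (took, _shards,
// total, ...) are ignored by the JSON decoder.
type esResponse struct {
	Hits struct {
		Hits []struct {
			Source struct {
				X float64 `json:"x"`
				Y float64 `json:"y"`
				Z float64 `json:"z"`
			} `json:"_source"`
		} `json:"hits"`
	} `json:"hits"`
}

// extractCoords pulls x/y/z out of the first hit in the response body.
func extractCoords(body []byte) (x, y, z float64, err error) {
	var r esResponse
	if err = json.Unmarshal(body, &r); err != nil {
		return
	}
	if len(r.Hits.Hits) == 0 {
		err = fmt.Errorf("no hits in response")
		return
	}
	s := r.Hits.Hits[0].Source
	return s.X, s.Y, s.Z, nil
}

func main() {
	// A trimmed-down example of what _search returns.
	sample := []byte(`{"took":3,"hits":{"hits":[{"_source":{"x":6.7,"y":0,"z":12.5}}]}}`)
	x, y, z, err := extractCoords(sample)
	fmt.Println(x, y, z, err)
}
```

In the Unity coroutine you would run the equivalent of extractCoords on the downloaded text every 3 seconds and feed the coordinates to the NavMesh agent (e.g. NavMeshAgent.SetDestination).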
I want to create a simple flow. First, I need to get accountIds from a REST service, then use the received values to create a new HTTP request to get a token, and then use this token to make some requests with OAuth2.
AccountIds flow:
From InvokeHTTP I'll receive some ids in JSON (the REST service is written in Java and returns a List of Integers). There is a 99% chance that there will be only one number. My response looks like: [40]. Now I need to strip the square brackets and get this number (using SplitJson). I should pass this number to the next getToken request as one of the GET parameters (in the screenshot I hardcoded it):
This will return a token. The token is text/plain;charset=UTF-8. Then I want to use InvokeHTTP again, add an Authorization attribute, and set it to Bearer plus the received token. I don't really understand how to use the data received from one processor in the following processors. Can someone explain how to achieve this with my flow?
The REST API provides you with a payload body (in NiFi terms, the flowfile content). You need to parse that incoming content with an EvaluateJsonPath processor (if the payload is JSON, as in most cases) and store the values in flowfile attributes.
These attributes can then be used in the downstream processors.
Also, to pass the Authorization header to your InvokeHTTP you need to declare it on the InvokeHTTP processor. The ${access_token} comes from the attribute extracted upstream.
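Concretely, the flow could look like the sketch below. The property values are from memory and may need adjusting for your NiFi version; account_id and access_token are just attribute names you choose, and dynamic properties added to InvokeHTTP are sent as request headers:

```
InvokeHTTP  (fetch accountIds)              -> body: [40]
EvaluateJsonPath
    Destination = flowfile-attribute
    account_id  = $[0]                      <- first element of the JSON array
InvokeHTTP  (get token)
    Remote URL  = http://host/getToken?accountId=${account_id}
ExtractText
    access_token = (?s)^(.*)$               <- whole text/plain body into an attribute
InvokeHTTP  (authorized request)
    Authorization = Bearer ${access_token}  <- dynamic property, sent as a header
```

Note that EvaluateJsonPath on $[0] avoids the SplitJson step entirely, since it reads the first array element directly into an attribute.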
I need a comprehensive list of what is valid and what is invalid when sending a bulk payload to Elasticsearch.
The bulk endpoint is an indexing endpoint, so at a high level you can only send indexing-type requests to it.
Since it's bulk, a valid request is built around multiple docs and how they are delimited. For example, if you are not using an ES client, you will need the payload formatted as NDJSON (newline-delimited JSON), and the last doc must end with a newline as well. It's better to use a client, since a client will do all of this for you.
Apart from the syntax of the payload data, you can target an index, a type, etc. in the URL.
You can also send other parameters like "wait_for_completion" and "retry_on_conflict". These parameters control how each request behaves.
Needless to say, the best thing is to read the docs:
https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html
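For reference, a minimal valid _bulk body looks like this: each action line is followed by its source line (delete takes none), the body must end with a newline, and it is sent with Content-Type: application/x-ndjson. The index name and fields here are just examples:

```
POST /_bulk
Content-Type: application/x-ndjson

{ "index": { "_index": "products", "_id": "1" } }
{ "name": "widget", "price": 9.99 }
{ "update": { "_index": "products", "_id": "1", "retry_on_conflict": 3 } }
{ "doc": { "price": 10.5 } }
{ "delete": { "_index": "products", "_id": "2" } }
```

A common invalid case is pretty-printed JSON spanning multiple lines: each action and each source document must occupy exactly one line.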
I'm writing software that processes a set of data coming from user input and sends an answer back to the user.
The flow starts from a configured API call that kicks off a chain of API calls, passing the result of each API to the next one until it reaches the final output.
The problem is that the chain of calls is configurable by the user, in order to process the data before saving it to the database.
To give you a little example:
I receive data from an API that carries the readings from a field sensor. On the arrival of this data I should do the following:
Save the data in the database
Process the data
Based on the data and on a configuration made by the user, get information from a different API (which API depends on the content of the data)
Send the information obtained from the other API to a third one, which will send it back to the sensor
Do you know any solution capable of doing this kind of work?
The language or framework doesn't matter; since it's brand new software, we are free to start from the very first step.
Thank you
I am assuming you have a form where the user enters the data; you receive the data, process it, and return an answer based on the data and your set of rules.
If you have a single form, get the data and pass it to the API (a RESTful API)
Process the data on the server side
Respond to the client based on your rules for the user-entered data
If you have multiple forms shown to the user one after another, do the same.
Hope this process works for you. If you can clarify the requirement more specifically, I can describe the process in more depth.
I need to simulate a test scenario where my application sends a request with hundreds of queries. On the back end, this request is broken down into requests containing a single query each, so a request from JMeter with 100 queries becomes 100 requests on the back end. Now, the response from the back end can either contain the requested data for each of those queries or contain a unique queryID. Sending back a queryID is the server's way of saying that the query is still running. For example, if JMeter sends a request with 100 queries, it might get back data for 80 and 20 unique queryIDs. So my application under test makes a callback request with those 20 queryIDs every 15 seconds until it gets back the requested data or times out.
Here is what I have implemented so far.
-main_request_with_100_queries
--XPath_extractor_to_extract_any_queryIDs_found
-if_controller_to_check_if_queryID_MatchNr_is_greater_than_0
--15_second_pause
--beanshell_preprocessor_to_create_the_request_body_with_all_queryIDs
--callback_request_with_queryIDs
What I want to implement is another XPath extractor for my callback_request, and if any queryIDs are found, go back to the if_controller.
I'm trying to make this work using a Module Controller, but so far no luck. Has anyone implemented something like this? Can anyone suggest some ideas?
You can use a While Controller to keep making the request as long as there is a queryID in the response.
While Controller [ "${queryid_present}" == "true" ]
  HTTP Request
    PreProcessor [ to create the request body with all queryIDs ]
    PostProcessor [ to check for queryIDs; if none are left, set queryid_present to false ]
If possible, try to use the Regular Expression Extractor instead; XPath is very slow and might hurt the performance of your script. Check here for more details:
Creating modular test script in JMeter.
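Putting this together, the test plan tree looks roughly like the sketch below. The queryid_present variable name is arbitrary; the While condition uses JMeter's __jexl3 function so the comparison is actually evaluated, and the JSR223 PostProcessor updates the variable via vars.put:

```
main_request_with_100_queries
    Extractor (regex or XPath)      -> extract queryIDs, set queryid_present = "true"/"false"
While Controller [ ${__jexl3("${queryid_present}" == "true")} ]
    Constant Timer [ 15000 ms ]
    JSR223 PreProcessor             -> build callback body from the extracted queryIDs
    callback_request_with_queryIDs
        JSR223 PostProcessor        -> vars.put("queryid_present", idsFound ? "true" : "false")
```

Because the extractor and PostProcessor both maintain queryid_present, the loop exits on the first callback response that contains no queryIDs, which removes the need for the Module Controller.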
I am creating a command-line utility in Go which requests data from a particular server and receives it as a JSON response. The utility can make multiple requests for multiple products.
I am able to make multiple requests and get the response for each product properly in JSON format.
Now I want to save the result locally in a cache or in local files, so that on every request I check the local cache before sending the request to the server. If data is available for that product, no request is sent.
I want to save the whole JSON response in the cache or local file and consult that data before making any request to the server.
Use case:
Products {"A","B","C","D","E"} (it could be any number of products)
Test 1: get data for A, B, C
Check local storage for the data; if it is not available, make the request.
Save the JSON response in storage for each product.
So for test 1, it will have an entry like:
[{test 1 : [{product a : json response} , {product b : json response} , {product c : json response}]}]
And if the test fails partway through, having gotten results for two products, it should save the responses for those 2 products; if we rerun the test, it will fetch the result for the 3rd product only.
There are a bunch of Go libraries that do HTTP caching transparently:
https://github.com/lox/httpcache
https://github.com/gregjones/httpcache
Choose the one you like most and that best satisfies your needs; both have examples in their README to get you started fast.
If for some reason you can't or don't want to use third-party libraries, check out this answer: https://stackoverflow.com/a/32885209/322221. It uses httputil.DumpResponse and http.ReadResponse, both in Go's standard library, and the answerer provides an example implementation you can base your work on.
You can store the data inside a map and look it up by key; you can implement this yourself or use a package such as go-cache.
Alternatively, you can use Redis for storing the data; here you can find the driver for Go.