Google Docs API delete all content

I noticed that with the Google Docs API I can do
{
  "requests": [
    {
      "deleteContentRange": {
        "range": {
          "startIndex": 1,
          "endIndex": 80
        }
      }
    }
  ]
}
but if the endIndex is greater than the total number of characters in the document, I get the following error:
{
  "error": {
    "code": 400,
    "message": "Invalid requests[0].deleteContentRange: Index 79 must be less than the end index of the referenced segment, 7.",
    "status": "INVALID_ARGUMENT"
  }
}
But I just want to delete all of the content, even though I don't know the end index value.
So: is it possible to get the endIndex somehow, or to delete all content another way?

You want to delete all content in a Google Document using the Docs API.
If my understanding is correct, how about this answer? Please think of it as just one of several possible answers.
Issue:
At the current stage, DeleteContentRangeRequest requires both startIndex and endIndex; this appears to be the specification. So your own question, "is it possible to get the endIndex somehow, or delete all content another way?", points to the method for resolving the issue.
Flow of workaround:
As a workaround, the following flow is used.
1. Retrieve the content object from the Google Document.
A sample curl command is below; replace ### with your Document ID. body.content(startIndex,endIndex) is used as the fields parameter, which keeps the response small and easy to read.
curl \
  'https://docs.googleapis.com/v1/documents/###?fields=body.content(startIndex%2CendIndex)' \
  --header 'Authorization: Bearer [YOUR_ACCESS_TOKEN]' \
  --header 'Accept: application/json'
The response value looks like this:
{
  "body": {
    "content": [
      {"endIndex": 1},
      {"startIndex": 1, "endIndex": 100},
      {"startIndex": 100, "endIndex": 200}
    ]
  }
}
The endIndex of the last element of content is the value you need.
2. Retrieve endIndex from the object.
From the response above, the values to use are startIndex 1 and endIndex 199. Be careful: using the raw endIndex of 200 causes an error, because the range's end must be less than the end index of the segment (the final newline of the body cannot be deleted). So subtract 1 from it.
3. Delete all content using those startIndex and endIndex values.
A sample curl command is as follows.
curl --request POST \
  'https://docs.googleapis.com/v1/documents/###:batchUpdate' \
  --header 'Authorization: Bearer [YOUR_ACCESS_TOKEN]' \
  --header 'Accept: application/json' \
  --header 'Content-Type: application/json' \
  --data '{"requests":[{"deleteContentRange":{"range":{"startIndex":1,"endIndex":199}}}]}'
References:
Method: documents.get
Method: documents.batchUpdate
DeleteContentRangeRequest
If I misunderstood your question and this was not the direction you want, I apologize.

Related

API to get price of Binance Smart Chain token on PancakeSwap

I have the address of a token and I need to get its price in BUSD or BNB.
It's not a problem to use a paid API if there is no other way. This token may not be listed on popular listing sites, so it would be nice to get the price directly from PancakeSwap.
Here is a way to get it directly from PancakeSwap:
https://api.pancakeswap.info/api/v2/tokens/0x8076c74c5e3f5852037f31ff0093eeb8c8add8d3
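For example, a minimal Python sketch of calling that endpoint; the price and price_BNB field names are my assumption of the current response shape, so check the raw JSON first:
import requests

# Token address taken from the example URL above.
token = '0x8076c74c5e3f5852037f31ff0093eeb8c8add8d3'
resp = requests.get('https://api.pancakeswap.info/api/v2/tokens/' + token)
resp.raise_for_status()
data = resp.json()['data']
print(data['price'])      # price in USD (assumed field name)
print(data['price_BNB'])  # price in BNB (assumed field name)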
A friend of mine used Moralis.
https://docs.moralis.io/introduction/readme
https://docs.moralis.io/moralis-dapp/web3-api/token#gettokenprice
Maybe you can already do something with the documentation; otherwise, here is example code from my colleague:
curl -X 'GET' \
  'https://deep-index.moralis.io/api/v2/erc20/0x42F6f551ae042cBe50C739158b4f0CAC0Edb9096/price?chain=bsc&exchange=PancakeSwapv2' \
  -H 'accept: application/json' \
  -H 'X-API-Key: MY-API-KEY'
Result:
{
  "nativePrice": {
    "value": "8409770570506626",
    "decimals": 18,
    "name": "Ether",
    "symbol": "ETH"
  },
  "usdPrice": 19.722370676,
  "exchangeAddress": "0x1f98431c8ad98523631ae4a59f267346ea31f984",
  "exchangeName": "Uniswap v3"
}
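If you would rather call Moralis from Python than curl, here is a minimal sketch of the same request; MY-API-KEY is a placeholder for your own key:
import requests

token = '0x42F6f551ae042cBe50C739158b4f0CAC0Edb9096'
url = 'https://deep-index.moralis.io/api/v2/erc20/' + token + '/price'
resp = requests.get(url,
                    params={'chain': 'bsc', 'exchange': 'PancakeSwapv2'},
                    headers={'accept': 'application/json',
                             'X-API-Key': 'MY-API-KEY'})
print(resp.json().get('usdPrice'))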
Alternatively, if you are using React you can try the following package: react-pancakeswap-token-price
You can also scrape charts.bogged.finance or poocoin.app.

Getting error index.max_inner_result_window during rolling upgrade of ES from 5.6.10 to 6.8.10

I have 2 data nodes and 3 master nodes in an ES cluster, and I was doing a rolling upgrade from 5.6.10 to 6.8.10 as ES suggests.
As there should be zero downtime, I was testing that and got one error.
I upgraded the first data node and did basic search testing; it worked fine. When I upgraded the second data node, search broke with the error below:
java.lang.IllegalArgumentException: Top hits result window is too large, the top hits aggregator [top]'s from + size must be less than or equal to: [100] but was [999]. This limit can be set by changing the [index.max_inner_result_window] index level setting.
index.max_inner_result_window was introduced in 6.x, and the master nodes are still on 5.6.10. So what is the solution with zero downtime?
Note: my indexing is stopped completely. My 2 data nodes are now on 6.8.10 and the master nodes are on 5.6.
Thanks
1 - Change the parameter on current indexes:
curl -X PUT "http://localhost:9200/_all/_settings?pretty" -H 'Content-Type: application/json' -d'
{
  "index.max_inner_result_window": "2147483647"
}
'
2 - Create a template for future indexes (on 6.8 this is the legacy _template endpoint; the _index_template endpoint only exists from 7.8 onward):
curl -X PUT "http://localhost:9200/_template/template_max_inner_result?pretty" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["*"],
  "settings": {
    "index": {
      "max_inner_result_window": 2147483647
    }
  }
}
'
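To check that the setting was applied, you can read it back from all indexes; a sketch, assuming the default host and port:
import requests

# The setting should now appear under each index's settings in the response.
resp = requests.get('http://localhost:9200/_all/_settings/index.max_inner_result_window')
print(resp.json())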

Backup and restore some records of an elasticsearch index

I wish to take a backup of some records (e.g. only the latest 1 million records) of an Elasticsearch index and restore this backup on a different machine. It would be better if this could be done using available/built-in Elasticsearch features.
I've tried Elasticsearch snapshot and restore (code below), but it looks like it backs up the whole index rather than selected records.
curl -H 'Content-Type: application/json' -X PUT "localhost:9200/_snapshot/es_data_dump?pretty=true" -d '
{
  "type": "fs",
  "settings": {
    "compress": true,
    "location": "es_data_dump"
  }
}'
curl -H 'Content-Type: application/json' -X PUT "localhost:9200/_snapshot/es_data_dump/snapshot1?wait_for_completion=true&pretty=true" -d '
{
  "indices": "index_name",
  "type": "fs",
  "settings": {
    "compress": true,
    "location": "es_data_dump"
  }
}'
The format of the backup can be anything, as long as it can be successfully restored on a different machine.
You can use the _reindex API; it can take any query. After reindexing, you have a new index as a backup, which contains the requested records, and you can easily copy it wherever you want (a sketch follows below).
Complete information is here: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html
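For this question's use case (only the latest 1 million records), the reindex body could look like the sketch below; index_name, the backup index name, and the timestamp sort field are assumptions to adjust to your data:
import requests

# Copy only the newest 1M documents into a separate backup index.
# Note: "size" was renamed "max_docs" in newer Elasticsearch versions.
body = {
    "size": 1000000,  # cap on how many documents get copied
    "source": {
        "index": "index_name",
        "sort": {"timestamp": "desc"},  # assumed date field
        "query": {"match_all": {}}
    },
    "dest": {"index": "index_name_backup"}
}
resp = requests.post("http://localhost:9200/_reindex?wait_for_completion=true",
                     json=body)
print(resp.json())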
In the end, I fetched the required data using the Python client, because that was the easiest for the given use case.
I ran an Elasticsearch query and stored its response in a file in newline-separated format, then later restored the data from it using another Python script. A maximum of 10000 entries is returned per request, along with a scroll ID used to fetch the next 10000 entries, and so on.
from elasticsearch import Elasticsearch

es = Elasticsearch(timeout=30, max_retries=10, retry_on_timeout=True)
# _query is the query body whose hits you want to dump (e.g. {'match_all': {}})
page = es.search(index=['ct_analytics'],
                 body={'size': 10000, 'query': _query, 'stored_fields': '*'},
                 scroll='5m')
while len(page['hits']['hits']) > 0:
    es_data = page['hits']['hits']  # store this as you like
    scrollId = page['_scroll_id']
    page = es.scroll(scroll_id=scrollId, scroll='5m')
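The restore side can then bulk-index the saved hits on the other machine. A minimal sketch, assuming the dump file (dump.jsonl, a hypothetical name) holds one hit per line and that each hit still contains its _source:
import json
from elasticsearch import Elasticsearch, helpers

# Point the client at the target machine (hypothetical host name).
es = Elasticsearch(['http://other-machine:9200'])
with open('dump.jsonl') as f:
    actions = ({'_index': 'ct_analytics', '_source': json.loads(line)['_source']}
               for line in f)
    helpers.bulk(es, actions)  # consume the generator while the file is open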

How to delete all attributes from the schema in solr?

Deleting all documents from Solr is
curl http://localhost:8983/solr/trans/update?commit=true -d "<delete><query>*:*</query></delete>"
Adding a (static) attribute to the schema is
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field":{"name":"trans","type":"string","stored":true,"indexed":true}}' http://localhost:8983/solr/trans/schema
Deleting one attribute is
curl -X POST -H 'Content-type:application/json' -d '{"delete-field":{"name":"trans"}}' http://arteika:8983/solr/trans/schema
Is there a way to delete all attributes from the schema?
At least in version 6.6 of the Schema API, and up to the current version 7.5, you can pass multiple commands in a single POST (see the 6.6 and 7.5 documentation, respectively). There are multiple accepted formats, but the most intuitive one (I think) is just passing an array for the action you want to perform:
curl -X POST -H 'Content-type: application/json' -d '{
  "delete-field": [
    {"name": "trans"},
    {"name": "other_field"}
  ]
}' 'http://arteika:8983/solr/trans/schema'
So. How do we obtain the names of the fields we want to delete? That can be done by querying the Schema:
curl -X GET -H 'Content-type: application/json' 'http://arteika:8983/solr/trans/schema'
In particular, the copyFields, dynamicFields and fields keys in the schema object in the response.
I automated clearing all copy field rules, dynamic field rules and fields as follows. You can of course use any kind of script that is available to you. I used Python 3 (might work with Python 2, I did not test that).
import json
import requests

# load schema information
api = 'http://arteika:8983/solr/trans/schema'
r = requests.get(api)

# delete copy field rules
names = [(o['source'], o['dest']) for o in r.json()['schema']['copyFields']]
payload = {'delete-copy-field': [{'source': name[0], 'dest': name[1]} for name in names]}
requests.post(api, data=json.dumps(payload),
              headers={'Content-type': 'application/json'})

# delete dynamic fields
names = [o['name'] for o in r.json()['schema']['dynamicFields']]
payload = {'delete-dynamic-field': [{'name': name} for name in names]}
requests.post(api, data=json.dumps(payload),
              headers={'Content-type': 'application/json'})

# delete fields
names = [o['name'] for o in r.json()['schema']['fields']]
payload = {'delete-field': [{'name': name} for name in names]}
requests.post(api, data=json.dumps(payload),
              headers={'Content-type': 'application/json'})
Just a note: I received status 400 responses at first, with null error messages, and had a bit of a hard time figuring out how to fix them, so I'm sharing what worked for me: changing the default of updateRequestProcessorChain in solrconfig.xml to false (default="${update.autoCreateFields:false}") and restarting the Solr service made those errors go away. The fields I was deleting were created automatically, which may have something to do with it.
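For reference, that change looks roughly like this in solrconfig.xml; a sketch only, since the chain's name and its processors depend on your config:
<!-- solrconfig.xml: default autoCreateFields to false so fields are not
     created automatically on update -->
<updateRequestProcessorChain name="add-unknown-fields-to-the-schema"
                             default="${update.autoCreateFields:false}">
  <!-- existing processors unchanged -->
</updateRequestProcessorChain>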

Create inventory Square API / UNIREST Picture Upload

I want to create a product on my website and have it created on Square (which is working). However, I also want to set the initial inventory, and it seems there is no way to do that from the documentation: https://docs.connect.squareup.com/api/connect/v1/#post-inventory-variationid
If I go into my Square account I can manually set an initial amount, then query that entry, get its id, and update it, but who wants to do anything manually? It defeats the purpose. Is there a way to create an inventory entry?
My second struggle is with uploading an image using Unirest.
function uploadItemImage($itemId, $image_file)
{
    global $accessToken, $locationId, $connectHost;
    $requestHeaders = array(
        'Authorization' => 'Bearer ' . $accessToken,
        'Accept' => 'application/json',
        'Content-Type' => 'multipart/form-data;'
    );
    $request_body = array(
        'image_data' => Unirest\Request\Body::file($image_file, 'text/plain', 'myproduct.jpg')
    );
    $response = Unirest\Request::post($connectHost . '/v1/' . $locationId . '/items/' . $itemId . '/image', $requestHeaders, $request_body);
    print(json_encode($response->type, JSON_PRETTY_PRINT));
}
where $itemId is taken from the product created earlier and $image_file is the direct link to the file on my server.
I keep getting this error:
> PHP Fatal error: Uncaught exception 'Unirest\Exception' with message
> 'couldn't open file "https://somewebsite/myPicture.jpg" ' in
> rootFolder/Unirest/Request.php:479 Stack trace:
> #0 rootFolder/Unirest/Request.php(292): Unirest\Request::send('POST', 'https://connect...', Array, Array, NULL, NULL)
> #1 rootFolder/
Any help is much appreciated!
Way to maximize the use of your question!
There is not currently a way to set initial inventory via the API, but new item and inventory management APIs are in the works; read more on the Square Blog.
I'm assuming that you are not literally using "https://somewebsite/myPicture.jpg", but it seems like Unirest thinks you are trying to use a web URL instead of reading a file from your filesystem. Try the following curl command and see if you can match up all the parts in Unirest (note the @ in --form, which makes curl attach the file's contents):
:)
curl --request POST \
  --url https://connect.squareup.com/v1/XXXXXX/items/XXXXX/image \
  --header 'authorization: Bearer sq0atp-XXXXX' \
  --header 'cache-control: no-cache' \
  --header 'content-type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW' \
  --form image_data=@/Users/ManuEng13/Desktop/test.png
