Near Staking details for an account

How can I get all the staking details for an address? I am not getting them from the NEAR archival node, and they are not available in the transaction details of the address either. I need the data as shown in the screenshot and URL below.
URL: Staking URL
Near Staking details for address 9837b929bde06a8331eb29cc08094e8b44c607e69f294ecc132914c919dc23ef
I tried searching the NEAR archival node but could not get the staking-related details for the address.
I used the curl query below to get the staking-related data for the address 9837b929bde06a8331eb29cc08094e8b44c607e69f294ecc132914c919dc23ef, since it was present in epoch 81834690, but could not find the address in the response that it returned:
curl --location --request POST 'http://10.213.67.4:1332/' \
--header 'Content-Type: application/json' \
--data-raw '{
  "jsonrpc": "2.0",
  "id": "dontcare",
  "method": "validators",
  "params": [81834690]
}'
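For what it's worth, the validators RPC only returns validator stakes per epoch, not per-delegator balances; delegated stake is held by the individual staking-pool contracts. A hedged sketch of the usual alternative, calling a pool's view method through the query RPC (the pool account id astro-stakers.poolv1.near is purely illustrative; you would need the pool(s) this address actually delegated to, and base64 -w0 assumes GNU coreutils):
# hypothetical pool and method; get_account_staked_balance is a view method of the
# standard staking-pool contract, and args must be base64-encoded JSON
ARGS=$(echo -n '{"account_id": "9837b929bde06a8331eb29cc08094e8b44c607e69f294ecc132914c919dc23ef"}' | base64 -w0)
curl --location --request POST 'http://10.213.67.4:1332/' \
--header 'Content-Type: application/json' \
--data-raw '{
  "jsonrpc": "2.0",
  "id": "dontcare",
  "method": "query",
  "params": {
    "request_type": "call_function",
    "finality": "final",
    "account_id": "astro-stakers.poolv1.near",
    "method_name": "get_account_staked_balance",
    "args_base64": "'"$ARGS"'"
  }
}'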

Related

How to disable the refresh interval in Elasticsearch, and does it require a restart of the node/cluster after updating the configuration?

How do I disable the refresh interval in Elasticsearch?
Once we disable the refresh interval, does it require a restart of the node or cluster?
Is the method below the correct way of disabling the refresh interval? Just checking, because I'm doing this on a production server which is under heavy load and holds billions of documents (a bit worried because of this).
curl -X PUT "localhost:9200/my-index-000001/_settings?pretty" -H 'Content-Type: application/json' -d'
{
  "index" : {
    "refresh_interval" : "-1"
  }
}
'
Yes, that's the correct way of disabling refresh_interval.
And no, you needn't restart your cluster, because this is a dynamic setting, as described here.
Note that you can also disable refresh_interval on multiple indices at once by using a wildcard such as my-index-*.
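And if you're disabling it for a one-off heavy indexing run, a companion sketch: assigning null afterwards resets refresh_interval to its default.
curl -X PUT "localhost:9200/my-index-000001/_settings?pretty" -H 'Content-Type: application/json' -d'
{
  "index" : {
    "refresh_interval" : null
  }
}
'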

How to get the number of currently open shards in an Elasticsearch cluster?

I can't find where to get the number of currently open shards.
I want to set up monitoring to avoid cases like this:
this cluster currently has [999]/[1000] maximum shards open
I can get the maximum limit, max_shards_per_node:
$ curl -X GET "${ELK_HOST}/_cluster/settings?include_defaults=true&flat_settings=true&pretty" 2>/dev/null | grep cluster.max_shards_per_node
"cluster.max_shards_per_node" : "1000",
$
But can't find out how to get number of the current open shards (999).
A very simple way to get this information is to call the _cat/shards API and count the number of lines using the wc shell command:
curl -s -XGET ${ELK_HOST}/_cat/shards | wc -l
That will yield a single number that represents the number of shards in your cluster.
Another option is to retrieve the shard list in JSON format, pipe the results into jq and then grab whatever you want, e.g. below I'm counting all STARTED shards:
curl -s -XGET ${ELK_HOST}/_cat/shards?format=json | jq ".[].state" | grep "STARTED" | wc -l
Yet another option is to query the _cluster/stats API:
curl -s -XGET ${ELK_HOST}/_cluster/stats?filter_path=indices.shards.total
That will return a JSON response with the shard count:
{
  "indices" : {
    "shards" : {
      "total" : 302
    }
  }
}
To my knowledge, there is no API that returns this single number directly. To be sure of that, let's look at the source code.
The error is thrown from IndicesService.java
To see how currentOpenShards is computed, we can then go to Metadata.java.
As you can see, the code iterates over the index metadata retrieved from the cluster state, pretty much like running the following command and counting the number of shards, but only for indices with "state" : "open":
GET _cluster/state?filter_path=metadata.indices.*.settings.index.number_of*,metadata.indices.*.state
From that evidence, we can pretty much be sure that the single number you're looking for is nowhere to be found, but needs to be computed by one of the methods I showed above. You're free to open a feature request if needed.
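Since the goal is monitoring, here is a rough sketch that combines the approaches above into one check (assuming ELK_HOST is set and jq is installed; note that the [999]/[1000] error compares against max_shards_per_node multiplied by the number of data nodes):
# current number of shards: one line per shard from _cat/shards
CURRENT=$(curl -s "${ELK_HOST}/_cat/shards" | wc -l)
# configured per-node limit: transient wins over persistent, which wins over the default
MAX=$(curl -s "${ELK_HOST}/_cluster/settings?include_defaults=true&flat_settings=true" \
  | jq -r '.transient["cluster.max_shards_per_node"] // .persistent["cluster.max_shards_per_node"] // .defaults["cluster.max_shards_per_node"]')
echo "shards open: ${CURRENT} (per-node limit: ${MAX})"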
The problem: it seems that your Elasticsearch cluster is hitting the limit on the number of shards per node.
Solution:
Verify the number of shards per node in your configuration and increase it using the Elasticsearch API.
For getting the number of shards, use the _cluster/stats API:
curl -s -XGET 'localhost:9200/_cluster/stats?filter_path=indices.shards.total'
From the Elastic docs:
The Cluster Stats API allows to retrieve statistics from a cluster wide perspective. The API returns basic index metrics (shard numbers, store size, memory usage) and information about the current nodes that form the cluster (number, roles, os, jvm versions, memory usage, cpu and installed plugins).
For updating the number of shards (increasing/decreasing), use the _cluster/settings API:
For example:
curl -XPUT -H 'Content-Type: application/json' 'localhost:9200/_cluster/settings' -d '{ "persistent" : {"cluster.max_shards_per_node" : 5000}}'
From the Elastic docs:
With specifications in the request body, this API call can update cluster settings. Updates to settings can be persistent, meaning they apply across restarts, or transient, where they don't survive a full cluster restart.
You can reset persistent or transient settings by assigning a null value. If a transient setting is reset, the first one of these values that is defined is applied:
1. the persistent setting
2. the setting in the configuration file
3. the default value
The order of precedence for cluster settings is:
1. transient cluster settings
2. persistent cluster settings
3. settings in the elasticsearch.yml configuration file
It's best to set all cluster-wide settings with the settings API and use the elasticsearch.yml file only for local configurations. This way you can be sure that the setting is the same on all nodes. If, on the other hand, you define different settings on different nodes by accident using the configuration file, it is very difficult to notice these discrepancies.
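For example, to later undo the override, a null value resets the setting to its default (per the docs quote above):
curl -XPUT -H 'Content-Type: application/json' 'localhost:9200/_cluster/settings' -d '{ "persistent" : {"cluster.max_shards_per_node" : null}}'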
You can also sum it up from _cat/indices (column 2 is the state, column 5 the primary count, column 6 the replica count; each index has pri * (rep + 1) shards):
curl -s '127.1:9200/_cat/indices' | awk '{ if ($2 == "open") C+=$5*($6+1)} END {print C}'
This works:
GET /_stats?level=shards&filter_path=_shards.total
Reference:
https://stackoverflow.com/a/38108448/4271117

API call to capture the daily step count from the Google Fit APIs

I am trying to get the daily step count to display on my personal dashboard, which I plan to run as an end-of-day job.
When I try to access this API endpoint with the access token and required authentication, I am having no luck. It only allows me to request a small dataset range.
https://www.googleapis.com/fitness/v1/users/me/dataSources/derived:com.google.step_count.delta:com.google.android.gms:estimated_steps/datasets/1470475368-1471080168
Any help to retrieve a daily step count if possible with a single call, will be really appreciated.
Read the daily steps total using the Fit Android API and Fit REST API.
Your app can read the current daily step count total across all data sources by making a POST request and querying the com.google.step_count.delta data type for the specified time period.
HTTP method
POST
Request URL
https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate
Request body
{
  "aggregateBy": [{
    "dataTypeName": "com.google.step_count.delta",
    "dataSourceId": "derived:com.google.step_count.delta:com.google.android.gms:estimated_steps"
  }],
  "bucketByTime": { "durationMillis": 86400000 },
  "startTimeMillis": 1438705622000,
  "endTimeMillis": 1439310422000
}
Curl command (aggregate.json contains the request body above):
curl \
-X POST \
-H "Content-Type: application/json;encoding=utf-8" \
-H "Authorization: Bearer $ACCESS_TOKEN" \
-d @aggregate.json \
https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate
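To run this as an end-of-day job, here is a small sketch for generating aggregate.json for the current day (assumes GNU date; both timestamps are in milliseconds, and a durationMillis of 86400000 means one 24-hour bucket):
START=$(( $(date -d 'today 00:00' +%s) * 1000 ))  # midnight today, in ms
END=$(( $(date +%s) * 1000 ))                     # now, in ms
cat > aggregate.json <<EOF
{
  "aggregateBy": [{
    "dataTypeName": "com.google.step_count.delta",
    "dataSourceId": "derived:com.google.step_count.delta:com.google.android.gms:estimated_steps"
  }],
  "bucketByTime": { "durationMillis": 86400000 },
  "startTimeMillis": $START,
  "endTimeMillis": $END
}
EOF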

Invalid Alphanumeric 'From' address with Twilio SMS

I am trying to send an SMS with Twilio using the alphanumeric 'from' address. I'm in Australia, sending to an Australian mobile number. My cURL request looks like this:
curl -X POST 'https://api.twilio.com/2010-04-01/Accounts/<Account SID>/Messages.json' \
--data-urlencode 'To=+614XXXXXXXX' \
--data-urlencode 'From=Test' \
--data-urlencode 'Body=Test' \
-u <Account SID>:<Auth Token>
The response I am receiving is:
{
  "code": 21212,
  "message": "The 'From' number Test is not a valid phone number, shortcode, or alphanumeric sender ID.",
  "more_info": "https://www.twilio.com/docs/errors/21212",
  "status": 400
}
I've tried a number of different alphanumeric sender IDs and none have been successful. My sender ID does appear to meet the requirements: characters in [a-zA-Z0-9 ], max length 11, and at least one alpha character. I've double-checked my Account SID and Auth Token, as well as the ability to send alphanumeric SMS within Australia. I've also verified I'm not accidentally using my test credentials, in case that would have any effect. My account also has credit on it.
Thanks in advance for any insight you can offer :)
Twilio developer evangelist here.
You're doing everything right for sending a message using an alphanumeric sender ID. However, in order to get your account set up for sending these messages, you need to contact support and request that your account be enabled to use alphanumeric sender IDs.
Let me know if that helps at all.

High disk watermark exceeded even when there is not much data in my index

I'm using Elasticsearch on my local machine. The data directory is only 37MB in size, but when I check the logs, I can see:
[2015-05-17 21:31:12,905][WARN ][cluster.routing.allocation.decider] [Chrome] high disk watermark [10%] exceeded on [h9P4UqnCR5SrXxwZKpQ2LQ][Chrome] free: 5.7gb[6.1%], shards will be relocated away from this node
Quite confused about what might be going wrong. Any help?
From Index Shard Allocation:
... watermark.high controls the high watermark. It defaults to 90%, meaning ES will attempt to relocate shards to another node if the node disk usage rises above 90%.
The size of your actual index doesn't matter; it's the free space left on the device which matters.
If the defaults are not appropriate for you, you have to change them.
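To see how close each node actually is to the watermarks, a quick check (not part of the quoted answer) is the _cat/allocation API, which reports disk.used, disk.avail and disk.percent per node:
curl -s 'localhost:9200/_cat/allocation?v'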
To resolve the issue when the log reads:
high disk watermark [90%] exceeded on
[ytI5oTyYSsCVfrB6CWFL1g][ytI5oTy][/var/lib/elasticsearch/nodes/0]
free: 552.2mb[4.3%], shards will be relocated away from this node
You can disable the disk threshold check entirely by executing the following curl request:
curl -XPUT "http://localhost:9200/_cluster/settings" \
-H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster": {
      "routing": {
        "allocation.disk.threshold_enabled": false
      }
    }
  }
}'
This slightly modified curl command from the Elasticsearch 6.4 docs worked for me:
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "2gb",
    "cluster.routing.allocation.disk.watermark.high": "1gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "500mb",
    "cluster.info.update.interval": "1m"
  }
}
'
If the curl command succeeds, you should see log lines like these in the Elasticsearch terminal window:
[2018-08-24T07:16:05,584][INFO ][o.e.c.s.ClusterSettings ] [bhjM1bz] updating [cluster.routing.allocation.disk.watermark.low] from [85%] to [2gb]
[2018-08-24T07:16:05,585][INFO ][o.e.c.s.ClusterSettings ] [bhjM1bz] updating [cluster.routing.allocation.disk.watermark.high] from [90%] to [1gb]
[2018-08-24T07:16:05,585][INFO ][o.e.c.s.ClusterSettings ] [bhjM1bz] updating [cluster.routing.allocation.disk.watermark.flood_stage] from [95%] to [500mb]
[2018-08-24T07:16:05,585][INFO ][o.e.c.s.ClusterSettings ] [bhjM1bz] updating [cluster.info.update.interval] from [30s] to [1m]
https://www.elastic.co/guide/en/elasticsearch/reference/current/disk-allocator.html
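Once disk space has been freed, a companion sketch: the transient overrides can be reset by assigning null values, which restores each setting's default.
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": null,
    "cluster.routing.allocation.disk.watermark.high": null,
    "cluster.routing.allocation.disk.watermark.flood_stage": null,
    "cluster.info.update.interval": null
  }
}
'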
It's a warning: Elasticsearch uses the high and low disk watermarks to decide when to relocate shards away from a node, so the cluster keeps working while it shows.
The possible solution is to free some disk space, and the warning will disappear.
Even while it shows, replicas will simply not be assigned to the node, which is okay; Elasticsearch will otherwise work fine.
Instead of percentages I use absolute values, to make better use of the disk space (in pre-prod):
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.threshold_enabled": true,
    "cluster.routing.allocation.disk.watermark.low": "1g",
    "cluster.routing.allocation.disk.watermark.high": "500m",
    "cluster.info.update.interval": "5m"
  }
}
I also poll less often (cluster.info.update.interval above) to keep the ES logs shorter.
Clear up some space on your hard drive; that should fix the issue. This should also change the health of your ES cluster from yellow to green (if you hit the above issue, you are most likely facing the yellow cluster health issue as well).