A few NEAR blocks are missing on mainnet but present on testnet and show as testnet-specific blocks. How to get those block details from mainnet - nearprotocol

There are a few blocks in NEAR which are missing on mainnet but present on testnet, and they show up as testnet-specific blocks. Please suggest how we should handle these blocks, or how to fetch them using the API "https://archival-rpc.mainnet.near.org". Below is the scenario for one such block.
If I try to get the details of block 73685420 using the following curl query:
curl --location --request POST 'https://archival-rpc.mainnet.near.org' \
--header 'Content-Type: application/json' \
--data-raw '{
"jsonrpc": "2.0",
"id": "dontcare",
"method": "block",
"params": {
"block_id": 73685420
}
}'
I get the following output:
{
"jsonrpc": "2.0",
"error": {
"name": "HANDLER_ERROR",
"cause": {
"info": {},
"name": "UNKNOWN_BLOCK"
},
"code": -32000,
"message": "Server error",
"data": "DB Not Found Error: BLOCK HEIGHT: 73685420 \n Cause: Unknown"
},
"id": "dontcare"
}
But when I searched for the above block in the testnet explorer, I was able to find it.
How can I get the details from mainnet?
Testnet Explorer Block

Maybe you can find the answer here: Why Blocks are Missing or Skipped on NEAR
A little explanation of the info inside the link:
Blocks are produced very quickly in NEAR Protocol, and transactions are expected to resolve fast. If the specific validator for a given height answers too late, that block is skipped and its transactions are resolved in the next one; this works as expected.
That is the reason for the skipped blocks.
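If you want to check programmatically whether a height was skipped, one minimal sketch (assuming jq is available, and assuming the archival node reports skipped heights as UNKNOWN_BLOCK, as in the output above) walks forward until the next produced block:
# Sketch only: walk forward from a possibly skipped height until the
# archival node returns a produced block. Reuses the "block" method
# and the error shape shown above.
HEIGHT=73685420
while true; do
  RESULT=$(curl -s --location --request POST 'https://archival-rpc.mainnet.near.org' \
    --header 'Content-Type: application/json' \
    --data-raw "{\"jsonrpc\": \"2.0\", \"id\": \"dontcare\", \"method\": \"block\", \"params\": {\"block_id\": $HEIGHT}}")
  if echo "$RESULT" | jq -e '.error.cause.name == "UNKNOWN_BLOCK"' > /dev/null; then
    HEIGHT=$((HEIGHT + 1))    # skipped height: try the next one
  else
    # print the height of the first produced block (or null on other errors)
    echo "$RESULT" | jq '.result.header.height'
    break
  fi
done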

Related

Invalid Argument on API Call?

I am getting an invalid argument error with the following API call (following https://developers.google.com/nest/device-access/api/doorbell-battery#webrtc):
curl -X POST 'https://smartdevicemanagement.googleapis.com/v1/enterprises/projectID/devices/deviceID:executeCommand' \
-H 'Content-Type: application/json' \
-H 'Authorization: AUTHTOKEN' --data-raw '{
"command" : "sdm.devices.commands.CameraLiveStream.GenerateWebRtcStream",
"params" : {
"offerSdp" : "a=recvonly"
}
}'
Response from server:
{
"error": {
"code": 400,
"message": "Request contains an invalid argument.",
"status": "INVALID_ARGUMENT"
}
}
What is the invalid argument?
My impression is that this is not a valid offer; you need to use a WebRTC client to create one. See webrtc.org for examples.
"offerSdp" : "a=recvonly" isn't a valid offer, but also you will get that INVALID_ARGUMENT error if you don't end your offer string with a \r\n character.

NEAR mainnet API: Error Block Missing (unavailable on the node)

I was testing the NEAR APIs and only a few endpoints are working as expected.
https://rpc.mainnet.near.org
I was trying to fetch a block by ID and it was throwing this error:
{
"jsonrpc": "2.0",
"error": {
"code": -32000,
"message": "Server error",
"data": "Block Missing (unavailable on the node): BBht2EZwfrGrucZKUuW91tMctfE3rMsUQJcFSduTRCGR \n Cause: Unknown"
},
"id": "dontcare"
}
Fetching the final block works, and it even works for blocks roughly 50 back, but for older blocks it throws the above error.
Is there a range of blocks this API supports?
Can I rely on this API to fetch historical data?
curl request
curl --location --request POST 'https://rpc.mainnet.near.org' --header 'Content-Type: application/json' --data-raw '{
"jsonrpc": "2.0",
"id": "dontcare",
"method": "block",
"params": {
"block_id": 33929500
}
}'
This block was garbage-collected. Regular nodes only keep blocks for the last 5 epochs; if you need historical data, you should instead query archival nodes (https://archival-rpc.mainnet.near.org).
See this answer for more details: https://stackoverflow.com/a/67199078/4950797
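The same request should succeed once pointed at the archival host; only the URL changes:
curl --location --request POST 'https://archival-rpc.mainnet.near.org' \
--header 'Content-Type: application/json' \
--data-raw '{
"jsonrpc": "2.0",
"id": "dontcare",
"method": "block",
"params": {
"block_id": 33929500
}
}'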

Unable to get NEAR Protocol transaction status via RPC

Given a transaction https://explorer.near.org/transactions/JBb2DDe3i1CtBwESisLuhxXkWVZpCKYL4J1AdYwAQPsQ
When I query NEAR rpc:
http post https://rpc.mainnet.near.org jsonrpc=2.0 method=tx params:='["JBb2DDe3i1CtBwESisLuhxXkWVZpCKYL4J1AdYwAQPsQ","wasmgit.near"]' id=dontcare
Then I expect to get the transaction status
Instead I get the following response:
{
"error": {
"code": -32000,
"data": "Transaction JBb2DDe3i1CtBwESisLuhxXkWVZpCKYL4J1AdYwAQPsQ doesn't exist",
"message": "Server error"
},
"id": "dontcare",
"jsonrpc": "2.0"
}
source: https://docs.near.org/docs/api/rpc#setup
Querying historical data (older than 5 epochs or ~2.5 days), you may
get responses that the data is not available anymore. In that case,
archival RPC nodes will come to your rescue:
mainnet https://archival-rpc.mainnet.near.org
testnet https://archival-rpc.testnet.near.org
You can see this interface defined in nearcore here.
via near-cli
near --nodeUrl https://archival-rpc.mainnet.near.org \
tx-status JBb2DDe3i1CtBwESisLuhxXkWVZpCKYL4J1AdYwAQPsQ \
--accountId wasmgit.near
via http
http post https://archival-rpc.mainnet.near.org \
jsonrpc=2.0 method=tx \
params:='["JBb2DDe3i1CtBwESisLuhxXkWVZpCKYL4J1AdYwAQPsQ","wasmgit.near"]' \
id=dontcare

Perspective API: Proper way to send requests with auto-detection of language

I am a bit confused on the proper way to send requests using Google's Perspective API.
Sending the following request works:
{"comment":{"text":"yo hamburger"},"languages":["en"],"requestedAttributes":{"TOXICITY":{}}}
In the documentation, it says, "...If you are using a production attribute, language is auto-detected if not specified in the request." So, I tried:
{"comment":{"text":"yo hamburger"},"requestedAttributes":{"TOXICITY":{}}}
And in response, I got a HTTP/1.0 400 Bad Request.
I also tried including all of the languages listed on the documentation page, like this:
{"comment":{"text":"yo hamburger"},"languages":["en","fr","es","de","it","pt"],"requestedAttributes":{"TOXICITY":{}}}
But that also gave me a response of HTTP/1.0 400 Bad Request.
Another attempt was made leaving the array of languages empty, like this:
{"comment":{"text":"yo hamburger"},"languages":[],"requestedAttributes":{"TOXICITY":{}}}
However, it still gave me a response of HTTP/1.0 400 Bad Request.
I was wondering, what is the proper way to send a request to the API and have it auto-detect language?
User x00 provided the path to the solution in the question's comment section. By using curl, I was able to see what was going on.
Here's what was happening:
In this first example, the system worked without error.
CURL:
curl -H "Content-Type: application/json" --data \
'{comment: {text: "yo hamburger"},
languages: ["en"],
requestedAttributes: {TOXICITY:{}} }' \
https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=[API_KEY]
RESPONSE:
{
"attributeScores": {
"TOXICITY": {
"spanScores": [
{
"begin": 0,
"end": 12,
"score": {
"value": 0.050692778,
"type": "PROBABILITY"
}
}
],
"summaryScore": {
"value": 0.050692778,
"type": "PROBABILITY"
}
}
},
"languages": [
"en"
],
"detectedLanguages": [
"tr",
"ja-Latn",
"de",
"en"
]
}
In this second example, the system was indeed auto-detecting the language, but since "yo hamburger" was detected as Turkish, it could not score the text and instead sent a 400 as the response code.
CURL:
curl -H "Content-Type: application/json" --data \
'{comment: {text: "yo hamburger"},
requestedAttributes: {TOXICITY:{}} }' \
https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=[API_KEY]
RESPONSE:
{
"error": {
"code": 400,
"message": "Attribute TOXICITY does not support request languages: tr",
"status": "INVALID_ARGUMENT",
"details": [
{
"#type": "type.googleapis.com/google.commentanalyzer.v1alpha1.Error",
"errorType": "LANGUAGE_NOT_SUPPORTED_BY_ATTRIBUTE",
"languageNotSupportedByAttributeError": {
"detectedLanguages": [
"tr"
],
"attribute": "TOXICITY"
}
}
]
}
}
This next example is more mysterious to me: the request field is plural, "languages," so it seems you should be able to provide more than one language. However, the API said it couldn't support that.
CURL:
curl -H "Content-Type: application/json" --data \
'{comment: {text: "yo hamburger"},
languages:["en","fr","es","de","it","pt"],
requestedAttributes: {TOXICITY:{}} }' \
https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=[API_KEY]
RESPONSE:
{
"error": {
"code": 400,
"message": "Attribute TOXICITY does not support request languages: en,fr,es,de,it,pt",
"status": "INVALID_ARGUMENT",
"details": [
{
"#type": "type.googleapis.com/google.commentanalyzer.v1alpha1.Error",
"errorType": "LANGUAGE_NOT_SUPPORTED_BY_ATTRIBUTE",
"languageNotSupportedByAttributeError": {
"requestedLanguages": [
"en",
"fr",
"es",
"de",
"it"
],
"attribute": "TOXICITY"
}
}
]
}
}
In this next example, leaving the languages array empty also triggered auto-detection of the language, but again "yo hamburger" was detected as Turkish, so no score was returned.
CURL:
curl -H "Content-Type: application/json" --data \
'{comment: {text: "yo hamburger"},
languages:[],
requestedAttributes: {TOXICITY:{}} }' \
https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=[API_KEY]
RESPONSE:
{
"error": {
"code": 400,
"message": "Attribute TOXICITY does not support request languages: tr",
"status": "INVALID_ARGUMENT",
"details": [
{
"#type": "type.googleapis.com/google.commentanalyzer.v1alpha1.Error",
"errorType": "LANGUAGE_NOT_SUPPORTED_BY_ATTRIBUTE",
"languageNotSupportedByAttributeError": {
"detectedLanguages": [
"tr"
],
"attribute": "TOXICITY"
}
}
]
}
}
Noticing that Perspective API would not allow me to choose all of the languages that are provided for the TOXICITY report, I decided to try two languages. The response was the same. Apparently Perspective API rejects the request if multiple languages are specified. Perhaps naming the field "languages" was a thought for the future.
CURL:
curl -H "Content-Type: application/json" --data \
'{comment: {text: "yo hamburger"},
languages: ["en","fr"],
requestedAttributes: {TOXICITY:{}} }' \
https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=[API_KEY]
RESPONSE:
{
"error": {
"code": 400,
"message": "Attribute TOXICITY does not support request languages: en,fr",
"status": "INVALID_ARGUMENT",
"details": [
{
"#type": "type.googleapis.com/google.commentanalyzer.v1alpha1.Error",
"errorType": "LANGUAGE_NOT_SUPPORTED_BY_ATTRIBUTE",
"languageNotSupportedByAttributeError": {
"requestedLanguages": [
"en",
"fr"
],
"attribute": "TOXICITY"
}
}
]
}
}
Maybe you're using a bad client library, or some other issue is causing the problem. Here is the documentation about the client library, where in the example the language is auto-detected without problems. Check that, and if you don't succeed, provide more details for further investigation.
As I said in the comments, the general approach to this kind of issue is: use curl. It helps a lot.
To sum up your findings:
auto-detection with a set of languages doesn't seem to work.
the correct way to send a request with auto-detection enabled is
{comment: {text: "some text"}, requestedAttributes: {TOXICITY:{}} }
but it sometimes fails on short texts, especially with slang inside.
So what can be done about it?
The easiest way is to assign some weight to Bad Requests (probably something around 0.5; see the sketch after these options). After all, the response you get is a probability, not a definitive answer. So:
toxicity score = 1 means "definitely toxic"
toxicity score = 0 means "not toxic at all"
and toxicity score = 0.5 means "we have no idea"
The same goes for a Bad Request: "you have no idea". You will get 0.5 from time to time anyway, so you must deal somehow with comments of that score, as well as with network errors etc.
That said, I would guess that the probability of toxicity for a comment that results in LANGUAGE_NOT_SUPPORTED_BY_ATTRIBUTE is higher than 0.5, but it's up to you to decide on the exact number.
As auto-detection doesn't work well with short texts, you can improve the odds of correct detection by adding some context to your request: a couple of other comments in the thread, or better yet, a couple of other comments from the same user. Not too big and not too small.
Make three requests, each specifying one language. As far as I can tell, TOXICITY works only with English, Spanish, and French. On GitHub I got this reply:
"TOXICITY is currently supported in English (en), Spanish (es), French (fr), German (de), Portuguese (pt), and Italian (it). We will work to remove the contradictions you identified."
Auto-detect the language yourself before sending a request. That will require some effort, but it shouldn't be too hard, given that you have much more context available to you than Perspective API (or any other third-party API) does.
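As a rough sketch of the first option (assuming jq is available; the endpoint, error type, and response fields are taken from the responses above), you could map unsupported-language errors to the neutral 0.5 score:
# Return the TOXICITY summary score for a comment, or 0.5 when the
# detected language is not supported. API_KEY is assumed to be set.
score_comment() {
  BODY=$(jq -n --arg text "$1" '{comment: {text: $text}, requestedAttributes: {TOXICITY: {}}}')
  RESPONSE=$(curl -s -H "Content-Type: application/json" --data "$BODY" \
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=$API_KEY")
  if echo "$RESPONSE" | jq -e 'any(.error.details[]?; .errorType == "LANGUAGE_NOT_SUPPORTED_BY_ATTRIBUTE")' > /dev/null; then
    echo "0.5"    # "we have no idea"
  else
    echo "$RESPONSE" | jq '.attributeScores.TOXICITY.summaryScore.value'
  fi
}
score_comment "yo hamburger"    # prints 0.5 when the text is detected as Turkish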
Also
These kinds of APIs are not supposed to run unattended; fine-tuning and moderation on your part are required, or else we'll end up in the worst-case scenario of algocracy :).
And I think it's a good idea in general to store per-user statistics on comment toxicity, as well as some manual coefficient, because, for example: Mathematical formulas give high toxicity.
I've posted a couple of issues on GitHub, but no reply yet (waiting for a reply on the second issue). When/if I get one, I'll update my answer with details.

Is pagination for Youtube API Channel Memberships (sponsors.list) broken?

I'm trying to paginate through a list of results using the youtube API for Channel Memberships (sponsors.list), but the paging and PageTokens don't seem to be working as they are supposed to.
I'm currently developing an application for a user to generate a list of all Members to their channel (using the api for sponsors.list: https://developers.google.com/youtube/v3/live/docs/sponsors/list)
I have a test account, and I've been able to successfully pull the list. However, the test account only has 5 memberships. Since the API can only pull a maximum of 50 results per page, I want to make sure that my app can account for the possibility that the channel will have 50+ sponsors.
So, I've set the results per page to give me just 1, theoretically giving me 5 pages I can then sift through to simulate 50+ members.
The problem arises when I try to page through the results... as the API says, I grab the nextPageToken from the results, and pass it in the next call in the pageToken parameter. However, when I do so, even when testing in the API explorer, I get back an empty list, and no nextPageToken for the next page.
{
"kind": "youtube#sponsorListResponse",
"etag": "\"XpPGQXPnxQJhLgs6enD_n8JR4Qk/UCSC321uKOiUT6GNkcPmkqoH1sY\"",
"pageInfo": {
"totalResults": 0,
"resultsPerPage": 0
},
"items": []
}
Additionally, if I pass a fake pageToken, the results come as if I'd passed no token at all, so it is at least recognizing the nextPageToken I'm passing it.
My google searches have failed me, other than just turning up pages talking about how the pagination is supposed to work... which it obviously isn't. Am I doing something wrong? Or is it indeed broken?
Edit
Here are the API calls I made.
Initial member list pull (After getting the authorization token, etc).
curl 'https://www.googleapis.com/youtube/v3/sponsors?part=snippet&filter=all&maxResults=1' \
--header 'Authorization: Bearer [SECRET_ACCESS_TOKEN]' \
--header 'Accept: application/json'
Which results in: (I've edited out sensitive info, like [CHANNEL_ID], etc).
{
"kind": "youtube#sponsorListResponse",
"etag": "\"XpPGQXPnxQJhLgs6enD_n8JR4Qk/PRgb6wjx--gdhgTtZ1auDKOony0\"",
"nextPageToken": "GLiawvDS6uEC",
"pageInfo": {
"totalResults": 5,
"resultsPerPage": 1
},
"items": [
{
"kind": "youtube#sponsor",
"etag": "\"XpPGQXPnxQJhLgs6enD_n8JR4Qk/LoD6jhrr94l_4soca-7lx14kyRQ\"",
"snippet": {
"channelId": "[CHANNEL_ID]",
"sponsorDetails": {
"channelId": "[CHANNEL_ID]",
"channelUrl": "[CHANNEL_URL]",
"displayName": "[DISPLAY_NAME]",
"profileImageUrl": "[PROFILE_IMAGE_URL]"
},
"sponsorSince": "2019-04-25T06:36:11.677Z"
}
}
]
}
So I grab the nextPageToken "GLiawvDS6uEC", and drop that into my next call in the pageToken field, as the API instructs.
curl 'https://www.googleapis.com/youtube/v3/sponsors?part=snippet&filter=all&maxResults=1&pageToken=GLiawvDS6uEC' \
--header 'Authorization: Bearer [SECRET_ACCESS_TOKEN]' \
--header 'Accept: application/json'
And wind up with this depressing result:
{
"kind": "youtube#sponsorListResponse",
"etag": "\"XpPGQXPnxQJhLgs6enD_n8JR4Qk/UCSC321uKOiUT6GNkcPmkqoH1sY\"",
"pageInfo": {
"totalResults": 0,
"resultsPerPage": 0
},
"items": []
}
So, it turns out this was an actual problem with the API. I had a friend who knew someone at Google; they looked into it and got the problem fixed! It works as intended now! Yay!
That said, if I hadn't had that connection, who knows if this would ever have been solved ;_;
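For reference, a minimal paging sketch now that the API behaves (jq assumed; ACCESS_TOKEN stands in for the bearer token used in the calls above):
# Follow nextPageToken until the API stops returning one, collecting
# every page of members along the way.
URL='https://www.googleapis.com/youtube/v3/sponsors?part=snippet&filter=all&maxResults=50'
TOKEN=''
while true; do
  PAGE=$(curl -s "$URL${TOKEN:+&pageToken=$TOKEN}" \
    --header "Authorization: Bearer $ACCESS_TOKEN" \
    --header 'Accept: application/json')
  echo "$PAGE" | jq '.items[]'                       # process this page's members
  TOKEN=$(echo "$PAGE" | jq -r '.nextPageToken // empty')
  [ -z "$TOKEN" ] && break                           # no token: this was the last page
done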
As far as I can see, a nextPageToken with the value GLiawvDS6uEC is invalid.
All the page tokens I came across followed a pattern described e.g. by Youtube Data API v3 pageToken for arbitrary page.
The API's documentation itself says nothing about what a page token should look like!
Maybe someone else has better insight into this issue. In any case, I suggest filing a report with Google.
