I made my request a week ago, and I still have this status:
"id": "304761766596473",
"start_ts": "2017-10-16T12:37:48+0000",
"end_ts": "2017-10-17T12:37:48+0000",
"status": "SCHEDULED",
What does it mean? Should I keep waiting for my data, or do I have to submit the request again?
I'm working with the Google AutoML Natural Language API.
I already have a trained model.
In the beginning, when I trained the model, it was deployed and everything was fine. According to Google's new rules effective 22 January 2019, models that have had no prediction traffic for 7 or more weeks are silently undeployed and archived.
At the moment, I can't get any predictions from that trained model, because it is undeployed, presumably after being unused for 7 weeks.
Also according to Google, if I'd like to continue using the model, I should redeploy it using the Deploy API.
https://cloud.google.com/natural-language/automl/docs/models#deploying_or_undeploying_a_model
I tried to redeploy the model, but I get an error, so I still can't make any predictions.
How can I deploy the model without errors, so that I can start predicting results again?
These are the steps I took to try to solve the problem:
1. Run the deploy request with the right data.
2. Run the operations request.
3. Wait for the deployment to finish.
4. Run the operations request again.
5. List the models I have (the model shows as UNDEPLOYED).
Step 1:
https://automl.googleapis.com/v1beta1/projects/{project}/locations/{location}/models/{Model ID}:deploy
{
"name": "projects/{project}/locations/{location}/operations/{Model ID}",
"metadata": {
"#type": "type.googleapis.com/google.cloud.automl.v1beta1.OperationMetadata",
"createTime": {Time},
"updateTime": {Time}
}
}
Steps 2 and 4:
https://automl.googleapis.com/v1beta1/projects/{project}/locations/{location}/operations
"operations": [
{
"name": "projects/{project}/locations/{location}/operations/{Model ID}",
"metadata": {
"#type": "type.googleapis.com/google.cloud.automl.v1beta1.OperationMetadata",
"createTime": {Time},
"updateTime": "{Time},
"progressPercent": 100
},
"done": true,
"error": {
"code": 4
}
}
]
Step 5:
https://automl.googleapis.com/v1beta1/projects/{project}/locations/{location}/models
"model": [
{
"name": "projects/{project}/locations/{location}/models/{Model ID}",
"displayName": {name},
"datasetId": {dataset id},
"createTime": {time},
"deploymentState": "UNDEPLOYED",
"updateTime": {time},
"textClassificationModelMetadata": {}
}
]
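For reference, steps 1 through 4 can also be scripted instead of run by hand. Below is a minimal Python sketch, assuming the requests library and an OAuth access token with AutoML scope; the placeholder values and variable names are my assumptions, not values from the original post.

import time
import requests

# Placeholders (assumptions, not values from the post):
PROJECT = "my-project"
LOCATION = "us-central1"
MODEL_ID = "my-model-id"
ACCESS_TOKEN = "..."  # OAuth 2.0 token, e.g. from gcloud auth print-access-token

BASE = "https://automl.googleapis.com/v1beta1"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# Step 1: start the deploy operation.
deploy_url = f"{BASE}/projects/{PROJECT}/locations/{LOCATION}/models/{MODEL_ID}:deploy"
operation = requests.post(deploy_url, headers=HEADERS).json()
op_name = operation["name"]  # "projects/.../locations/.../operations/..."

# Steps 2-4: poll the operation until it is done, then inspect it for errors.
while True:
    op = requests.get(f"{BASE}/{op_name}", headers=HEADERS).json()
    if op.get("done"):
        break
    time.sleep(30)

if "error" in op:
    print("Deployment failed:", op["error"])  # e.g. code 4 = DEADLINE_EXCEEDED
else:
    print("Deployment succeeded:", op.get("metadata", {}))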
So I was expecting zero errors in the operations response once the model had finished the deployment process, but it shows error code 4. I looked this error code up in the google.rpc.Code enum: https://github.com/googleapis/googleapis/blob/master/google/rpc/code.proto
For error code 4:
// The deadline expired before the operation could complete. For operations
// that change the state of the system, this error may be returned
// even if the operation has completed successfully. For example, a
// successful response from a server could have been delayed long
// enough for the deadline to expire.
//
// HTTP Mapping: 504 Gateway Timeout
DEADLINE_EXCEEDED = 4;
I don't know why this timeout is happening.
I already checked the Quotas & Limits page, but everything there looks fine.
This problem should be resolved now. Sorry about the inconvenience; your model should be deployable again. Please try, and write back if you still see an issue.
I am attempting some simple tests of the Google Speech API, and when my server makes a request to the URL below, I get the "404. That's an error." response. I'm not sure why.
https://speech.googleapis.com/v1/speech:recognize?key=[MY_API_KEY]
The body of my request looks like this:
{
"config": {
"languageCode": "en-US",
"encoding": "LINEAR16",
"sampleRateHertz": 16000,
"enableWordTimeOffsets": true,
"speechContexts": [{
"phrases": ["Some", "Helpful", "Phrases"]
}]
},
"audio":{
"uri":"gs://mydomain.com/my_file.mp3"
}
}
And the response is the standard Google "404. That's an error." page.
As you can see, that is a valid resource path, unless I'm totally mistaken about something (I'm sure I am): https://cloud.google.com/speech-to-text/docs/reference/rest/v1/speech/recognize
Update 1: Whenever I try this with the Google API Explorer tool, I get this quota-exceeded message (even though I have not yet issued a single successful request to the API).
{
"error": {
"code": 429,
"message": "Quota exceeded for quota metric 'speech.googleapis.com/default_requests' and limit 'DefaultRequestsPerMinutePerProject' of service 'speech.googleapis.com' for consumer '[MY_API_KEY]'.",
"status": "RESOURCE_EXHAUSTED",
"details": [
{
"#type": "type.googleapis.com/google.rpc.Help",
"links": [
{
"description": "Google developer console API key",
"url": "https://console.developers.google.com/project/[my_project_id]/apiui/credential"
}
]
}
]
}
}
Update 2: Interestingly, I was able to get some 200 OKs using the Restlet client, but even in those cases the response body is empty (see screenshot below).
I made a test using the exact URL and body content you added to the post, and I was able to execute the API call correctly.
I noticed that if I add an extra character to the URL, it fails with the same 404 error, since that resource doesn't exist. I would suggest verifying that the URL of your request doesn't contain a typo and that the client you use is executing the API call correctly. Also, make sure your calling code is not percent-encoding the URL, which could cause issues given the colon (:) in speech:recognize.
I recommend performing this test using the Try this API tool directly, or the Restlet client; those are the ones I used to replicate this scenario.
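To make the point about URL handling concrete, here is a minimal Python sketch of the same call. The use of the requests library and the API_KEY placeholder are my assumptions; the body mirrors the one in the question.

import requests

API_KEY = "MY_API_KEY"  # placeholder: replace with your real key

# Note the literal colon in "speech:recognize"; it must not be percent-encoded.
url = f"https://speech.googleapis.com/v1/speech:recognize?key={API_KEY}"

body = {
    "config": {
        "languageCode": "en-US",
        "encoding": "LINEAR16",
        "sampleRateHertz": 16000,
        "enableWordTimeOffsets": True,
        "speechContexts": [{"phrases": ["Some", "Helpful", "Phrases"]}],
    },
    "audio": {"uri": "gs://mydomain.com/my_file.mp3"},
}

resp = requests.post(url, json=body)
print(resp.status_code, resp.json())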
I am using Microsoft Graph. I try to add two attachments with:
POST /me/messages/{messageId}/attachments
{
"#odata.type": "#microsoft.graph.fileAttachment",
"name": "1.txt",
"contentBytes": "SGVsbG8gd29ybGQh"
}
POST /me/messages/{messageId}/attachments
{
"#odata.type": "#microsoft.graph.fileAttachment",
"name": "2.txt",
"contentBytes": "SGVsbG8gd29ybGQhIQ=="
}
It gives me a 412 (Precondition Failed) error when I add these two attachments at the same time.
{
"code": "ErrorIrresolvableConflict",
"message": "The send or update operation could not be performed because the change key passed in the request does not match the current change key for the item., Cannot save changes made to an item to store.SaveStatus: IrresolvableConflict\r\nPropertyConflicts:\r\n",
"innerError": {
"request-id": "20e95141-5d2d-41e3-8eed-3bbd24bcf52a",
"date": "2017-11-28T07:18:45"
}
}
Right now the workaround is to delay the second POST by around 100 milliseconds; with less than 100 milliseconds, it is more likely to fail. (The chance of failure might also be related to the size of the attachment; I didn't test that further.)
But if I have 10 attachments, there will be a 100-millisecond delay between every two POSTs.
BTW, I have seen that this issue exists even in the Outlook client: when people send mail, they get the same error (check here). So it might be a server issue.
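Instead of a fixed delay, one alternative is to post the attachments strictly one after another and retry with backoff on 412. This is only a sketch of that idea, assuming the requests library and a valid Graph access token; it is not an officially documented pattern.

import base64
import time
import requests

ACCESS_TOKEN = "..."   # placeholder: a valid Microsoft Graph token
MESSAGE_ID = "AAMk..."  # placeholder message id
url = f"https://graph.microsoft.com/v1.0/me/messages/{MESSAGE_ID}/attachments"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

attachments = [("1.txt", b"Hello world!"), ("2.txt", b"Hello world!!")]

for name, data in attachments:
    body = {
        "@odata.type": "#microsoft.graph.fileAttachment",
        "name": name,
        "contentBytes": base64.b64encode(data).decode("ascii"),
    }
    # Post sequentially, and back off on 412 (ErrorIrresolvableConflict),
    # which seems to occur while the previous write is still settling.
    for attempt in range(5):
        resp = requests.post(url, json=body, headers=headers)
        if resp.status_code != 412:
            break
        time.sleep(0.1 * (2 ** attempt))
    resp.raise_for_status()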
[Just moved from my original question to an answer.]
I've been using the Google Genomics API for about a day now. I've successfully called many of the APIs, like Datasets.list, Datasets.get, and even Readsets.search, but I'm having a problem with Callsets.search.
I'm making a POST request to:
POST https://www.googleapis.com/genomics/v1beta/callsets/search?key=MY_KEY_HERE
And my request body is:
{
"datasetIds" : [
"376902546192"
]
}
But the response I'm getting back is:
{
"error" : {
"errors": [
{
"domain": "global",
"reason": "invalid",
"message": "Unknown field name: datasetIds",
"locationType": "other",
"location": ""
}
],
"code": 400,
"message": "Unknown field name: datasetIds"
}
}
According to the documentation (https://developers.google.com/genomics/v1beta/reference/callsets/search), datasetIds is a perfectly valid parameter.
The crazy thing that's perplexing me is that this identical request works just fine on the readsets/search endpoint but not on the callsets/search endpoint. I'm almost wondering if it's a bug in the API. Can anyone help?
Received this from Google:
The variants and callsets APIs just went
through some breaking changes so that they'll be compliant with GA4GH
v0.5 when they go fully public.
All breaking changes should be done now - and I'll try to get all the
docs and code samples updated today or tomorrow.
Until then, you can see the real parameters in the API explorer (it
can't lie :) - in this case, the datasetId field has now changed to
"variantSetIds" (still using that same value, just a rename)
I just tested it, and it works. Below are the results:
$ java -jar target/genomics-tools-client-java-v1beta.jar searchcallsets --dataset_id 376902546192
Getting call sets from: 1000 Genomes
{"created":"1410541777431","id":"376902546192-0","name":"HG00345","sampleId":"HG00345","variantSetIds":["376902546192"]}
{"created":"1410541777431","id":"376902546192-1","name":"HG00369","sampleId":"HG00369","variantSetIds":["376902546192"]}
{"created":"1410541777431","id":"376902546192-2","name":"HG01085","sampleId":"HG01085","variantSetIds":["376902546192"]}
{"created":"1410541777431","id":"376902546192-3","name":"HG01107","sampleId":"HG01107","variantSetIds":["376902546192"]}
{"created":"1410541777431","id":"376902546192-4","name":"NA12347","sampleId":"NA12347","variantSetIds":["376902546192"]}
{"created":"1410541777431","id":"376902546192-5","name":"NA18579","sampleId":"NA18579","variantSetIds":["376902546192"]}
{"created":"1410541777431","id":"376902546192-6","name":"HG00372","sampleId":"HG00372","variantSetIds":["376902546192"]}
{"created":"1410541777431","id":"376902546192-7","name":"HG01134","sampleId":"HG01134","variantSetIds":["376902546192"]}
{"created":"1410541777431","id":"376902546192-8","name":"NA18532","sampleId":"NA18532","variantSetIds":["376902546192"]}
{"created":"1410541777431","id":"376902546192-9","name":"NA18597","sampleId":"NA18597","variantSetIds":["376902546192"]}
Hope it helps,
Paul
Let's say I have an endpoint user/1/results and I want to upload multiple results at a time.
So I send it JSON like:
{
"data": [
{
"date": "2014-02-14 03:15:41",
"score": 18649,
"time": 42892
},
{
"date": "2013-11-18 09:21:46",
"score": 7856,
"time": 23568.8
}]
}
Let's say time needs to be an integer, so the second entity fails validation.
What's the best thing to do?
1. Fail both, save nothing, and respond with an error message.
2. Save the first entity and respond with an error message.
In either case, what would the error message look like? That is, how (if at all) does it specify that it's the second entity that failed validation?
I think you should fail both and respond with an error message, because it could otherwise be cumbersome to track which of the results remain unsaved.
The error message should give the details of the failing location; for example, if validation fails on the second entity, specify that in the JSON response.
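A hypothetical error body (the field names here are illustrative, not any standard) could point at the failing entity by its index in the data array:
{
  "errors": [
    {
      "index": 1,
      "field": "time",
      "message": "time must be an integer"
    }
  ]
}
Here index 1 means the second entity, since the array is zero-based.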