I'm coding an application that uses MapKit. When I request directions for many locations in a for loop using MapKit's MKDirections, it fails with the error "Directions Not Available" and the following details:
Error Domain=MKErrorDomain Code=3 "Directions Not Available" UserInfo={NSLocalizedFailureReason=Route information is not available at this moment., MKErrorGEOError=-3, MKErrorGEOErrorUserInfo={
details = (
{
intervalType = short;
maxRequests = 50;
"throttler.keyPath" = "app:lszlp.nobetciEczane/0x20200/short(default/any)";
timeUntilReset = 54;
windowSize = 60;
}
);
timeUntilReset = 54;
What are the possible causes?
I've found that two factors must be taken into account to avoid this type of error.
First, the Apple Maps server doesn't permit more than one location's request within 60 seconds, so you have to check the time between two consecutive location requests.
Second, the maximum number of requests is set to 50 by the Apple Maps server, as written in the error details. So you have to limit your for loop to 50 iterations. I could not find any documentation on why this limit exists.
With these two approaches, the problem went away.
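The two rules above can be combined into a small client-side throttle. A minimal sketch (plain Python, independent of the MapKit API; the 50-requests-per-60-seconds values are taken from the error payload, not from any documented Apple quota):

```python
class RequestThrottler:
    """Rolling-window throttle: at most `max_requests` per `window` seconds."""

    def __init__(self, max_requests=50, window_seconds=60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.sent = []  # timestamps of requests inside the current window

    def wait_time(self, now):
        # Drop timestamps that have aged out of the rolling window.
        self.sent = [t for t in self.sent if now - t < self.window]
        if len(self.sent) < self.max_requests:
            return 0.0  # safe to send immediately
        # Otherwise wait until the oldest request leaves the window.
        return self.window - (now - self.sent[0])

    def record(self, now):
        self.sent.append(now)
```

Before each directions request you would check `wait_time(time.time())`, sleep that long if it is non-zero, then `record` the request.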
I am attempting to write a simulation that can read from a config file a set of APIs that each have a set of properties.
I read the config for n active scenarios and create requests from a CommonRequest class.
Those requests are then built into scenarios from a CommonScenario.
CommonScenarios have attributes that are used to create their injection profiles.
That all seems to work without issue. But when I try to use the properties / CommonScenario requests to build a set of Assertions, it does not work as expected.
// get active scenarios from the config
val activeApiScenarios: List[String] = Utils.getStringListProperty("my.active_scenarios")

// build all active scenarios from config
var activeScenarios: Set[CommonScenario] = Set[CommonScenario]()
activeApiScenarios.foreach { scenario =>
  activeScenarios += CommonScenarioBuilder()
    .withRequestName(Utils.getProperty("my." + scenario + ".request_name"))
    .withRegion(Utils.getProperty("my." + scenario + ".region"))
    .withConstQps(Utils.getDoubleProperty("my." + scenario + ".const_qps"))
    .withStartQps(Utils.getDoubleListProperty("my." + scenario + ".toth_qps").head)
    .withPeakQps(Utils.getDoubleListProperty("my." + scenario + ".toth_qps")(1))
    .withEndQps(Utils.getDoubleListProperty("my." + scenario + ".toth_qps")(2))
    .withFeeder(Utils.getProperty("my." + scenario + ".feeder"))
    .withAssertionP99(Utils.getDoubleProperty("my." + scenario + ".p99_lte_assertion"))
    .build
}
// build population builder set by adding inject profile values to scenarios
var injectScenarios: Set[PopulationBuilder] = Set[PopulationBuilder]()
var assertions: Set[Assertion] = Set[Assertion]()
activeScenarios.foreach { scenario =>
  // create injection profiles from CommonScenarios
  injectScenarios += scenario.getCommonScenarioBuilder
    .inject(nothingFor(5 seconds),
      rampUsersPerSec(scenario.startQps).to(scenario.rampUpQps).during(rampOne seconds),
      rampUsersPerSec(scenario.rampUpQps).to(scenario.peakQps).during(rampTwo seconds),
      rampUsersPerSec(scenario.peakQps).to(scenario.rampDownQps).during(rampTwo seconds),
      rampUsersPerSec(scenario.rampDownQps).to(scenario.endQps).during(rampOne seconds))
    .protocols(httpProtocol)

  // create scenario assertions; this does not work for some reason
  assertions += Assertion(Details(List(scenario.requestName)), TimeTarget(ResponseTime, Percentiles(4)), Lte(scenario.assertionP99))
}

setUp(injectScenarios.toList)
  .assertions(assertions)
Note: scenario.requestName comes straight from the built scenario:
.feed(feederBuilder)
.exec(commonRequest)
I would expect the Assertions to be built from their scenarios into an iterable and passed into setUp().
What I get:
When I print everything out, the scenarios and injection profiles all look good, but when I print my "assertions" I get 4 assertions for the same scenario name with 4 different Lte() values. (This output is generalized; I actually configured 12 APIs, all with different names, Lte() values, etc.)
Details(List(Request Name)) - TimeTarget(ResponseTime,Percentiles(4.0)) - Lte(500.0)
Details(List(Request Name)) - TimeTarget(ResponseTime,Percentiles(4.0)) - Lte(1500.0)
Details(List(Request Name)) - TimeTarget(ResponseTime,Percentiles(4.0)) - Lte(1000.0)
Details(List(Request Name)) - TimeTarget(ResponseTime,Percentiles(4.0)) - Lte(2000.0)
After the simulation the assertions all run like normal:
Request Name: 4th percentile of response time is less than or equal to 500.0 : false
Request Name: 4th percentile of response time is less than or equal to 1500.0 : false
Request Name: 4th percentile of response time is less than or equal to 1000.0 : false
Request Name: 4th percentile of response time is less than or equal to 2000.0 : false
Not sure what I am doing wrong when building my assertions. Is this even a valid approach? I wanted to ask for help before I abandon this for a different approach.
Disclaimer: Gatling creator here.
It should work.
That said, there are several things I'm really not fond of.
assertions += Assertion(Details(List(scenario.requestName)), TimeTarget(ResponseTime, Percentiles(4)), Lte(scenario.assertionP99))
You shouldn't be using the internal AST here. You should use the DSL like you've done for the injection profile.
var assertions: Set[Assertion] = Set[Assertion]()
activeScenarios.foreach { scenario =>
You should use map on activeScenarios (similar to Java's Stream API), not use a mutable accumulator.
val activeScenarios = activeApiScenarios.map(???)
val injectScenarios = activeScenarios.map(???)
val assertions = activeScenarios.map(???)
Also, since you seem not to be familiar with Scala, you might want to switch to Java (which Gatling has supported for more than a year).
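The map-instead-of-mutable-accumulator pattern the answer recommends can be sketched generically (Python here rather than Gatling's Scala DSL; `build_scenario` and the config values are hypothetical stand-ins for CommonScenarioBuilder and the Utils.getProperty lookups):

```python
def build_scenario(name, p99):
    # Hypothetical stand-in for CommonScenarioBuilder().with...().build
    return {"request_name": name, "p99": p99}

active_api_scenarios = ["api_a", "api_b"]        # names read from config
config_p99 = {"api_a": 500.0, "api_b": 1500.0}   # hypothetical p99 thresholds

# One immutable pipeline: names -> scenarios -> assertions,
# instead of foreach { ... } appending to a mutable Set.
active_scenarios = [build_scenario(n, config_p99[n]) for n in active_api_scenarios]
assertions = [(s["request_name"], s["p99"]) for s in active_scenarios]
```

Each derived collection is produced in one expression, so there is no accumulator state to get wrong.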
I'm programming a contacts export from our database to Google Contacts using the Google People API. I'm programming the requests over URL via Google Apps Script.
The code below - using https://people.googleapis.com/v1/people:batchCreateContacts - works for 13 to about 15 single requests, but then Google returns this error message:
Quota exceeded for quota metric 'Critical read requests (Contact and Profile Reads)' and limit 'Critical read requests (Contact and Profile Reads) per minute per user' of service 'people.googleapis.com' for consumer 'project_number:***'.
For speed I send the request with batches of 10 parallel requests.
I have the following two questions regarding this problem:
Why would I hit a quota regarding read requests when creating contacts?
Given the picture linked below, why would sending 2 batches of 10 simultaneous requests (more precisely: 13 to 15 single requests) hit that quota limit anyway?
[Image: quota limit of 90 read requests per user per minute, as displayed on console.cloud.google.com]
Thank you for any clarification!
Further reading: https://developers.google.com/people/api/rest/v1/people/batchCreateContacts
let payloads = [];
let lengthPayloads;
let limitPayload = 200;
/*Break up contacts in payload limits*/
contacts.forEach(function (contact, index) /*contacts is an array of objects for the API*/
{
if(!(index%limitPayload))
{
lengthPayloads = payloads.push(
{
'readMask': "userDefined",
'sources': ["READ_SOURCE_TYPE_CONTACT"],
'contacts': []
}
);
}
payloads[lengthPayloads-1]['contacts'].push(contact);
}
);
Logger.log("which makes "+payloads.length+" payloads");
let parallelRequests = [];
let lengthParallelRequests;
let limitParallelRequest = 10;
/*Break up payloads in parallel request limits*/
payloads.forEach(function (payload, index)
{
if(!(index%limitParallelRequest))
lengthParallelRequests = parallelRequests.push([]);
parallelRequests[lengthParallelRequests-1].push(
{
'url': "https://people.googleapis.com/v1/people:batchCreateContacts",
'method': "post",
'contentType': "application/json",
'payload': JSON.stringify(payload),
'headers': { 'Authorization': "Bearer " + token }, /*token is a token of a single user*/
'muteHttpExceptions': true
}
);
}
);
Logger.log("which makes "+parallelRequests.length+" parallelrequests");
let responses;
parallelRequests.forEach(function (parallelRequest)
{
responses = UrlFetchApp.fetchAll(parallelRequest); /* error occurs here*/
responses = responses.map(function (response) { return JSON.parse(response.getContentText()); });
responses.forEach(function (response)
{
if(response.error)
{
Logger.log(JSON.stringify(response));
throw response;
}
else Logger.log("ok");
}
);
}
);
Output of logs:
which makes 22 payloads
which makes 3 parallelrequests
ok (15 times)
(the error message)
I had raised the same issue in Google's issue tracker.
It seems that a single batchCreateContacts or batchUpdateContacts call consumes six (6) units of "Critical Read Requests" quota per request. I still did not get an answer as to why we hit the critical read request limit when creating/updating contacts.
Quota exceeded for quota metric 'Critical read requests (Contact and Profile Reads)' and limit 'Critical read requests (Contact and Profile Reads) per minute per user' of service 'people.googleapis.com' for consumer 'project_number:***'.
There are two types of quotas: project-based quotas and user-based quotas. Project-based quotas are limits placed upon your project itself. User-based quotas are more like flood protection: they limit the number of requests a single user can make over a period of time.
When you send a batch request with 10 requests in it, it counts as ten requests, not as a single batch request. If you try to run these in parallel, then you are definitely going to overflow the requests-per-minute-per-user quota.
Slow down; this is not a race.
Why, for creating contacts, I would hit a quota regarding read requests?
I would chalk it up to a bad error message.
Given the picture linked below, why would sending 13 to 15 requests hit that quota limit anyway? (There are 3 read requests before this code.) Quota limit of 90 read requests per user per minute, as displayed on console.cloud.google.com.
Well, you are sending 13 * 10 = 130 requests per minute, which exceeds the per-minute limit. There is also no way of knowing how fast your system is running; it depends on what else the server is doing when it gets your requests and which minute they are actually recorded in.
My advice is to just respect the quota limits and not try to understand why; there are too many variables on Google's servers to track down what exactly a minute is. You could send 100 requests in 10 seconds and then try to send another 100 at 55 seconds and get the error, or you could get the error after 65 seconds, depending on when they hit the server and when the server finished processing your initial 100 requests.
Again: slow down.
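Since each batch of N parallel requests counts as N requests against the per-minute quota, the pacing the answer asks for can be computed directly. A small sketch (the 90-requests-per-minute figure is the limit shown in the asker's console screenshot, not a universal constant):

```python
def min_batch_interval(quota_per_minute, batch_size):
    """Smallest delay between batches so that `batch_size` requests per
    batch never exceed `quota_per_minute` requests in any minute."""
    return 60.0 * batch_size / quota_per_minute

def max_batches_per_minute(quota_per_minute, batch_size):
    """How many full batches fit into one minute's quota."""
    return quota_per_minute // batch_size
```

With the quota above, batches of 10 must be at least 60 * 10 / 90 ≈ 6.7 seconds apart, i.e. at most 9 batches per minute.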
[Disclaimer: I published this question 3 weeks ago on Biostars, with no answers yet. I would really like to get some ideas/discussion to find a solution, so I am also posting here.
Biostars post link: https://www.biostars.org/p/447413/]
For one of the projects of my PhD, I would like to access all variants found in the ClinVar db that are in the same genomic position as the variant in each row of the input GSVar file. The language constraint is Python.
Up to now I have used entrezpy module: entrezpy.esearch.esearcher. Please see more for entrezpy at: https://entrezpy.readthedocs.io/en/master/
From the entrezpy docs I have followed this guide to access UIDs using the genomic position of a variant: https://entrezpy.readthedocs.io/en/master/tutorials/esearch/esearch_uids.html in code:
# first get UIDs for clinvar records of the same position
# credits: https://entrezpy.readthedocs.io/en/master/tutorials/esearch/esearch_uids.html
chr = variants["chr"].split("chr")[1]
start, end = str(variants["start"]), str(variants["end"])
es = entrezpy.esearch.esearcher.Esearcher('esearcher', self.entrez_email)
genomic_pos = chr + "[chr]" + " AND " + start + ":" + end # + "[chrpos37]"
entrez_query = es.inquire(
{'db': 'clinvar',
'term': genomic_pos,
'retmax': 100000,
'retstart': 0,
'rettype': 'uilist'}) # 'usehistory': False
entrez_uids = entrez_query.get_result().uids
Then I have used Entrez from BioPython to get the available ClinVar records:
# process each VariationArchive of each UID
handle = Entrez.efetch(db='clinvar', id=current_entrez_uids, rettype='vcv')
clinvar_records = {}
tree = ET.parse(handle)
root = tree.getroot()
This approach works. However, it has two main drawbacks:
1. entrezpy fills up my log file with a record of every interaction with Entrez, making the log too big to be read by the hospital collaborator, who is a variant curator.
2. The entrezpy call entrez_query.get_result().uids returns all UIDs retrieved so far across all requests (one request per variant in the GSVar file), which makes the retrieval space-inefficient: the entrez_uids list quickly grows as I process all variants of a GSVar file. The simple solution I have implemented is to check which UIDs of the current request are new and keep only those for Entrez.efetch(). However, I still need to keep all UIDs seen for previous variants in order to know which UIDs are new. I do this in code by:
# first snippet's first lines go here
entrez_uids = entrez_query.get_result().uids
current_entrez_uids = [uid for uid in entrez_uids if uid not in self.all_entrez_uids_gsvar_file]
self.all_entrez_uids_gsvar_file += current_entrez_uids
Does anyone have suggestion(s) on how to address these two presented drawbacks?
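For the second drawback, one incremental improvement (a sketch, not a full fix: it still keeps all seen UIDs in memory) is to hold the seen UIDs in a set, so the membership test is O(1) instead of a scan of an ever-growing list:

```python
def take_new_uids(current_uids, seen_uids):
    """Return only the UIDs not seen in earlier requests, updating
    `seen_uids` (a set) in place. Replaces the list-based
    `uid not in self.all_entrez_uids_gsvar_file` check."""
    fresh = [uid for uid in current_uids if uid not in seen_uids]
    seen_uids.update(fresh)
    return fresh
```

A disk-backed store (e.g. sqlite) would be needed to also cap the memory footprint.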
I am using omnet++-5.4.1, veins-4.7.1 and sumo-0.25.0 to simulate vehicle frame transmission.
Regarding the behavior of EDCA in the MAC layer in WAVE: in my understanding, the waiting time before sending can be obtained by the following calculation.
waiting time = AIFS[AC] + backoff
AIFS[AC] = SIFS + AIFSN[AC] * slotlength
However, in the startContent function of Mac1609_4.cc, it is written as follows:
if (idleTime > possibleNextEvent) {
DBG_MAC << "Could have already send if we had it earlier" << std::endl;
//we could have already sent. round up to next boundary
simtime_t base = idleSince + DIFS;
possibleNextEvent = simTime() - simtime_t().setRaw((simTime() - base).raw() % SLOTLENGTH_11P.raw()) + SLOTLENGTH_11P;
}
Even during simulation, transmission is performed without waiting for the calculated time right after the transmission request occurs.
From the above, it seems that the original EDCA (CSMA/CA) procedure is not performed and the busyness of the channel is not sensed.
Do I not understand this MAC layer well enough? Please let me know if I missed some information.
Thank you.
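For reference, the waiting-time formula from the question can be evaluated numerically. The constants below are the commonly cited 802.11p values (SIFS = 32 µs, slot = 13 µs) with an AIFSN of 2 for the highest-priority access category; they are assumptions to illustrate the formula, so check them against your Veins version:

```python
SIFS_US = 32  # 802.11p short interframe space, microseconds (assumed)
SLOT_US = 13  # 802.11p slot length, microseconds (assumed)

def aifs_us(aifsn):
    # AIFS[AC] = SIFS + AIFSN[AC] * slotlength
    return SIFS_US + aifsn * SLOT_US

def waiting_time_us(aifsn, backoff_slots):
    # waiting time = AIFS[AC] + backoff
    return aifs_us(aifsn) + backoff_slots * SLOT_US
```

For AIFSN = 2 this gives AIFS = 32 + 2 * 13 = 58 µs before the backoff slots are added.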
I'm receiving a 403 User Rate Limit Exceeded error when making queries, but I'm sure I'm not exceeding the limit.
In the past I've reached the rate limit doing inserts, and it was reflected in the job list as:
[errorResult] => Array
(
[reason] => rateLimitExceeded
[message] => Exceeded rate limits: too many imports for this project
)
But in this case the job list doesn't reflect the query (neither error nor done), and studying the job list I haven't reached the limits or come close to them (no more than 4 concurrent queries, each processing 692297 bytes).
I have billing active, and I've made only 2.5K queries in the last 28 days.
Edit: The user limit is set to 500.0 requests/second/user.
Edit: Error code received:
User Rate Limit Exceeded User Rate Limit Exceeded
Error 403
Edit: code that I use to make the query jobs and get results
function query_data($project, $dataset, $query, $jobid = null) {
    $jobc = new JobConfigurationQuery();
    $query_object = new QueryRequest();
    $dataset_object = new DatasetReference();
    $dataset_object->setProjectId($project);
    $dataset_object->setDatasetId($dataset);
    $query_object->setQuery($query);
    $query_object->setDefaultDataset($dataset_object);
    $query_object->setMaxResults(16000);
    $query_object->setKind('bigquery#queryRequest');
    $query_object->setTimeoutMs(0);
    $ok = false;
    $sleep = 1;
    while (!$ok) {
        try {
            $response_data = $this->bq->jobs->query($project, $query_object);
            $ok = true;
        } catch (Exception $e) { // sleep when the BQ API is not available
            sleep($sleep);
            $sleep += rand(0, 60);
        }
    }
    try {
        $response = $this->bq->jobs->getQueryResults($project, $response_data['jobReference']['jobId']);
    } catch (Exception $e) {
        // do nothing, it retries below
    }
    $tries = 0;
    while (!$response['jobComplete'] && $tries < 10) {
        sleep(rand(5, 10));
        try {
            $response = $this->bq->jobs->getQueryResults($project, $response_data['jobReference']['jobId']);
        } catch (Exception $e) {
            // do nothing, it retries
        }
        $tries++;
    }
    $result = array();
    foreach ($response['rows'] as $k => $row) {
        $tmp_row = array();
        foreach ($row['f'] as $field => $value) {
            $tmp_row[$response['schema']['fields'][$field]['name']] = $value['v'];
        }
        $result[] = $tmp_row;
        unset($response['rows'][$k]);
    }
    return $result;
}
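The retry loop above grows the sleep linearly by a random amount (`$sleep += rand(0,60)`). A common alternative when hitting rate limits is exponential backoff with full jitter; a minimal sketch (in Python rather than the PHP client used above):

```python
import random

def backoff_delay(attempt, base=1.0, cap=60.0):
    """Delay before retry number `attempt` (0-based): uniformly random
    between 0 and min(cap, base * 2**attempt) seconds ("full jitter")."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

Attempt 0 waits up to 1 s, attempt 1 up to 2 s, and so on, capped at 60 s; the randomness keeps many clients from retrying in lockstep.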
Are there any other rate limits? Or is it a bug?
Thanks!
You get this error trying to import CSV files, right?
It could be one of these reasons:
Import Requests
Rate limit: 2 imports per minute
Daily limit: 1,000 import requests per day (including failures)
Maximum number of files to import per request: 500
Maximum import size per file: 4GB
Maximum import size per job: 100GB
The query() call is, in fact, limited by the 20-concurrent-queries limit. The 500 requests/second/user limit in the developer console is somewhat misleading: it is just the number of total API calls (get, list, etc.) that can be made.
Are you saying that your query is failing immediately and never shows up in the job list?
Do you have the full error that is being returned? I.e does the 403 message contain any additional information?
thanks
I've solved the problem by using only one server to make the requests.
Looking at what I was doing differently in the night cronjobs (which never fail), the only difference was that I was using a single client on one server instead of different clients on 4 different servers.
Now I have only one script on one server that manages the same number of queries, and it never gets the User Rate Limit Exceeded error.
I think there is a bug managing many clients or many active IPs at a time, although the total number of threads never exceeds 20.