google maps api -> set max cruising speed - performance

How can I set the max cruising speed when I calculate a new route?
I use the Google Maps API on my site for trucks and want to set a "max speed limit" of 80 km/h. Any ideas?

The Google Maps API does not provide speed limit information or filtering, so you can't.
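If you still need a capped ETA, one workaround is to post-process the route yourself: the Directions web service returns per-leg distances and durations, so you can recompute each leg's travel time against a speed ceiling. A minimal sketch in Python (my own illustration, not anything the Maps API offers), assuming the standard JSON shape of a Directions response, where `legs[i]["distance"]["value"]` is in metres and `legs[i]["duration"]["value"]` is in seconds:

```python
# Recompute a route's duration with a capped cruising speed.
# Assumes the JSON shape of a Google Directions API response.

MAX_SPEED_KMH = 80
MAX_SPEED_MPS = MAX_SPEED_KMH * 1000 / 3600  # ~22.2 m/s

def capped_duration_seconds(route):
    """Total travel time, never assuming a speed above MAX_SPEED_KMH."""
    total = 0
    for leg in route["legs"]:
        distance_m = leg["distance"]["value"]
        google_s = leg["duration"]["value"]
        # Time this leg would take at the capped speed.
        capped_s = distance_m / MAX_SPEED_MPS
        # Google's estimate already reflects slower roads and traffic,
        # so take whichever is longer.
        total += max(google_s, capped_s)
    return total
```

Legs mix road types, so this is approximate; iterating over each leg's `steps` (which carry the same `distance`/`duration` fields) gives finer granularity.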

It should. Any travel guidance system that provides a time needs to take all sorts of factors into the calculation - just see how effective TomTom IQ Routes is! It uses not only speed limits but actual real-time speed data gathered from TomTom users. Also, I tow a caravan, so the 70 mph limit on a UK motorway does not apply to me; I can set a maximum speed of 60 mph (the UK motorway maximum when towing), which makes the transit times pretty close to reality most of the time. Google Maps times are meaningless.
KC

Related

Reaching quota too soon on Youtube Data API V3 - optimizing search.list [duplicate]

I'm building a pretty large app for a client that is going to aggregate feeds from various sources. My client estimates around 900 follow-able users will be in this system to start out, with more being added over time. He wants to update the feed data every 15 minutes, so we would need to update one user feed per second, assuming 900 feeds and a 15 minute TTL. As the requests take a few seconds to complete, we would then need to load balance across a few threads to tackle the queue asynchronously.
Should I be worried about quota errors or hitting any kind of limitations? If so, what are our options?
I've already read their help pages and documentation, but it's very vague; I need concrete numbers. It's not feasible to load test their API to figure out the limitation.
Version 3 of the YouTube Data API has concrete quota numbers listed in the Google API Console where you register for your API Key. You can use 10,000 units per day. Projects that had enabled the YouTube Data API before April 20, 2016, have a default quota of 50,000,000 per day.
You can read about what a unit is here:
https://developers.google.com/youtube/v3/getting-started#quota
A simple read operation that only retrieves the ID of each returned resource has a cost of approximately 1 unit.
A write operation has a cost of approximately 50 units.
A video upload has a cost of approximately 1600 units.
If you hit the limits, Google will stop returning results until your quota is reset. You can apply for more than 1,000,000 requests per day, but you will have to pay for those extra requests.
YouTube also provides a calculator that is a good tool for estimating your usage:
https://developers.google.com/youtube/v3/determine_quota_cost
If you need to make more requests than allotted, you can request a higher quota here: https://support.google.com/youtube/contact/yt_api_form
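To put these numbers against the scenario in the question, here is a back-of-the-envelope estimate in Python. The 100-unit cost of a search.list call is taken from the quota calculator linked above; adjust the constants to match the endpoints you actually call:

```python
# Rough daily quota estimate for polling 900 feeds every 15 minutes.
FEEDS = 900
REFRESHES_PER_DAY = 24 * 60 // 15   # 96 refreshes per feed per day
UNITS_PER_CALL = 100                # search.list cost per the calculator;
                                    # a plain videos.list read is ~1 unit

daily_units = FEEDS * REFRESHES_PER_DAY * UNITS_PER_CALL
print(f"{daily_units:,} units/day")                       # 8,640,000

DEFAULT_QUOTA = 10_000
print(f"over default quota by {daily_units / DEFAULT_QUOTA:.0f}x")  # 864x
```

With these assumptions the default 10,000-unit quota is exceeded by nearly three orders of magnitude, which is why switching from search.list to the cheaper list endpoints, or requesting a quota increase, matters so much here.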

Does parse pricing charge by plan or actual usage?

Let's say I buy 1000 req/s but only use 30 req/s. Which do I get charged for? The docs ("pro-rated by the hour") are unclear to me. How can I spend the minimal amount while ensuring all requests are fulfilled?
You are charged by plan. If you take the 1000 req/s offer you will pay for 1000 req/s even if you only use 1.
The best way to handle that is to monitor the requests and change the plan when you get "too close" to the limit. I cannot give you a number or percentage as it heavily depends on your app. If your app evolves very slowly, you can probably wait until you reach 90%+ of the limit. On the other hand, if the app evolves very fast and you know you might get a lot of new users at any moment because your app is trending, you might want to be careful and start changing your plan at 50 or 60%. By default Parse warns you when you reach 75%. In the end it's all about how your app is growing. There is an Analytics section in your app's dashboard on Parse that will help you monitor the requests and the growth of the app.
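The "change the plan when you get too close" advice is easy to automate. A minimal sketch in Python; the threshold and the way you obtain the current peak rate are assumptions on my part, and the real numbers would come from your own monitoring or the Parse Analytics dashboard:

```python
# Decide whether it is time to move to the next plan tier.
# observed_peak_rps would come from your own monitoring or from
# the Analytics section of the Parse dashboard.

def should_upgrade(observed_peak_rps, plan_limit_rps, threshold=0.75):
    """True once peak usage crosses the chosen fraction of the plan limit.

    0.75 mirrors Parse's default warning level; pick a lower threshold
    if your app grows quickly.
    """
    return observed_peak_rps >= plan_limit_rps * threshold

# Example: on the 1000 req/s plan, peaking at 30 req/s -> no upgrade yet.
print(should_upgrade(observed_peak_rps=30, plan_limit_rps=1000))   # False
print(should_upgrade(observed_peak_rps=800, plan_limit_rps=1000))  # True
```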

Use traditional post, Ajax post or Channel API for my own Like button?

I'm developing a mobile app with 40 million users per day.
The app will show articles to the user that they can choose to read, no image, just plain text. The user can pull to refresh to get new articles.
I would like to implement a Like button for each individual article (my own like button, not Facebook's). Assume each user clicks Like 100 times per day; that works out to 40M x 100 = 4,000M (4 billion) requests per day.
I'm a newbie with no experience with big projects. What is the best approach for my project? I found that the Google Channel API costs 0.0001 dollars per channel created, which is 80M x 0.0001 = 8,000 USD per day (assuming 2 connections per person) - quite expensive. Is there another way to do this, e.g. Ajax or a traditional POST? My app doesn't need real-time updates. Which one consumes fewer resources? Can someone please guide me? I really need help.
I plan to use Google App Engine for this project.
A small difference in efficiency would multiply into a significant change in operational costs at those volumes. I would not blindly trust theoretical claims made by documentation. It would be sensible to build and test each alternative design and ensure it is compatible with the rest of your software. A few days of trials with several thousand simulated users will produce valuable results at a bearable cost.
Channels, Ajax and conventional web requests are all feasible at the conceptual level of your question. Add in some monitoring code and compare the results of load tests at various levels of scale. In addition to performance and cost, the instrumentation code should also monitor reliability.
I highly doubt your app will get 40 million users a day, and doubt even more that each of those will click Like 100 times a day.
But I don't understand why clicking Like would result in a lot of data transfer. It's a simple Ajax request, which wouldn't even need to return anything more than an empty response, with a 200 code for success and a 400 for failure.
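To make the "empty response" point concrete, a Like handler can be tiny. A minimal sketch using Flask (my choice for illustration; it runs on App Engine, but any lightweight framework works, and the in-memory store is a naive placeholder, not a design for 4 billion writes a day):

```python
from flask import Flask, abort

app = Flask(__name__)

# Placeholder store; at real scale this would be a sharded counter
# or a batched write queue, not an in-memory dict.
likes = {}

@app.route("/like/<article_id>", methods=["POST"])
def like(article_id):
    if not article_id.isalnum():
        abort(400)                       # bad request
    likes[article_id] = likes.get(article_id, 0) + 1
    return "", 200                       # empty body is all the client needs
```

The request and response bodies are essentially empty, so per-click transfer is a few hundred bytes of headers; the cost at scale is in instance hours and datastore writes, not bandwidth.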
I would experiment with different options on a small scale first, to get some data from which you can extrapolate your costs. However, a simple Ajax request with a lightweight handler is likely to be cheaper than the Channel API.
If you are getting 40m daily users, each reading at least 100 articles and then making 100 likes, I'm guessing you will have something exceeding 8bn requests a day. On that basis, your instance costs are likely to be significant before even considering a Like button. At that volume of requests, how you handle them on the server side will be important in managing your costs.
Using tools like AppStats, Chrome Dev Tools and Apache JMeter will help you get a better view of your response times, instance & bandwidth costs and user experience before you scale up.

Which are the most Relevant Performance Parameters / Measures for a Web Application

We are re-implementing (yes, from scratch) a web application which is currently in production. We have decided to do some performance tests on the new app to get some early information about its capabilities.
As the old application is currently in production and performs well, we would like to extract some performance parameters, and then use these parameters as a reference or baseline goal for the performance of the new application.
Which do you think are the most relevant performance parameters we should be obtaining from the current production application?
Thanks!
Determine what pages are used the most.
Measure a latency histogram for the total time it takes to answer the request. Don't just measure the mean, measure a histogram.
From the histogram you can see what percentage of requests fall at which latency in milliseconds. You can choose key performance indicators by taking the values at the 50th and 95th percentiles. These will tell you the median latency and the worst-case latency (for the worst 5% of requests).
Those two numbers alone will bring you great confidence regarding the experience your users will have.
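Computing those two indicators from raw request timings is straightforward. A minimal sketch, assuming you have collected per-request latencies in milliseconds:

```python
import math

# Derive the two key indicators (median and 95th percentile)
# from a list of per-request latencies in milliseconds.

def percentile(latencies_ms, pct):
    """Nearest-rank percentile; good enough for dashboards."""
    ordered = sorted(latencies_ms)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [12, 15, 11, 240, 13, 18, 14, 16, 950, 17]
print("p50:", percentile(latencies_ms, 50))  # typical request: 15
print("p95:", percentile(latencies_ms, 95))  # worst 5% of requests: 950
```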
Throughput does not matter to users, but it does matter for capacity planning.
I also recommend that you track the performance values over time and review them twice a year.
Just in case you need an HTTP client, there is weighttp, a multi-threaded client written by the guys from Lighttpd.
It has the same syntax used by ApacheBench, but weighttp lets you use several client worker threads (AB is single-threaded so it cannot saturate a modern SMP Web server).
The answer from "usr" is valid, but you can also record the minimum, average and maximum latencies (useful for seeing the range they fall in). Here is a public-domain C program to automate all this over a given concurrency range.
Disclaimer: I am involved in the development of this project.
