I'm testing the performance of a mobile app that stores images in Azure Blob Storage.
While profiling the app I noticed a storage query with a time to first byte of nearly 3 s. I downloaded the related Azure Storage log file and saw that both the end-to-end latency and the server latency were near 3 s, while the SLA says latency should be in the range of milliseconds.
Note that I am the only user of the app, as I'm still developing it.
Some details:
This is a general-purpose storage account located in West Europe, with North Europe as the secondary region.
I'm based in France.
I use the Microsoft Storage SDK to query the storage account.
It uses a Shared Access Signature (SAS).
An example of query: https://{myaccoount}.blob.core.windows.net/{Container}/{filename.jpg}?sv=2015-04-05&sr=c&sig=wEtcqKsJRl5ouxUceTTXgzVLk7bDoMvLJITlPTLEFRo%3D&se=2016-09-17T16%3A15%3A16Z&sp=r&api-version=2015-07-08&randomguid=3dde0ba3a14047eaba0d186902fee650
Metrics are automatically written to Tables and logs to Blobs.
It's not a network issue, since ServerLatency is high (see the documentation).
Any suggestions on how to improve the latency, and on why it isn't in the millisecond range?
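For reference, this is roughly how I measure the time to first byte outside the app (a minimal sketch in Python using the requests package; the URL below is a placeholder for the SAS URL above):

```python
import time
import requests

# Placeholder for the SAS URL shown above -- substitute your own.
blob_url = (
    "https://myaccount.blob.core.windows.net/mycontainer/photo.jpg"
    "?sv=2015-04-05&sr=c&sp=r&sig=..."
)

start = time.perf_counter()
# stream=True defers the body download, so reading the first chunk
# approximates time to first byte rather than total download time.
with requests.get(blob_url, stream=True, timeout=30) as resp:
    resp.raise_for_status()
    next(resp.iter_content(chunk_size=1))
    print(f"TTFB: {(time.perf_counter() - start) * 1000:.0f} ms")
```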
I want to create an app for uploading videos to YouTube, so I'm using the YouTube Data API to offer my users a service for uploading videos to YouTube.
The official documentation says:
https://developers.google.com/youtube/v3/getting-started?hl=tr#calculating-quota-usage
Google calculates your quota usage by assigning a cost to each request. Different types of operations have different quota costs. For example:
A read operation that retrieves a list of resources -- channels, videos, playlists -- usually costs 1 unit.
A write operation that creates, updates, or deletes a resource usually costs 50 units.
A search request costs 100 units.
A video upload costs 1600 units.
The Quota costs for API requests table shows the quota cost of each API method. With these rules in mind, you can estimate the number of requests that your application could send per day without exceeding your quota.
Is this quota at the application level or the user level? If it's at the application level, does that mean I only get enough credit for 6 video uploads per day?
What is the clear explanation for this case? Is there any difference between the app-level quota and the user-level quota?
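For context, the upload in question is a single videos.insert call per video, roughly like this (a hedged sketch using google-api-python-client; the client-secret file name and video metadata are placeholders):

```python
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

# OAuth 2.0 flow for one of my users (client_secret.json is a placeholder).
flow = InstalledAppFlow.from_client_secrets_file(
    "client_secret.json",
    scopes=["https://www.googleapis.com/auth/youtube.upload"],
)
creds = flow.run_local_server(port=0)

youtube = build("youtube", "v3", credentials=creds)
request = youtube.videos().insert(
    part="snippet,status",
    body={
        "snippet": {"title": "My video"},
        "status": {"privacyStatus": "private"},
    },
    media_body=MediaFileUpload("video.mp4", resumable=True),
)
response = request.execute()  # each call like this costs 1600 quota units
```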
The quota is accounted per Google project: each Google project is allocated an amount of daily quota (10,000 units by default), and each API call (whether made with an API key or with an access token obtained upon successfully completing an OAuth 2.0 authentication/authorization flow) is deducted from that amount.
Thus, by means of a given Google project, a given application -- if granted permission by several users to access their YouTube channels upon successful completion of OAuth 2.0 authentication/authorization flows -- can upload videos to multiple channels.
But, as you noted, with a project allocated 10,000 units of daily quota, the number of videos that can be uploaded on any given day cannot exceed six (not counting the other API calls the application may issue), since each upload costs 1,600 units.
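As a back-of-the-envelope check (the costs are the ones quoted from the documentation above):

```python
DAILY_QUOTA = 10_000   # default project-level allocation
UPLOAD_COST = 1_600    # videos.insert
SEARCH_COST = 100      # search.list, as an example of other traffic

# e.g. if the app also issues a few searches that day:
remaining = DAILY_QUOTA - 4 * SEARCH_COST
print(remaining // UPLOAD_COST)  # -> 6 uploads at most
```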
Of course, there's the possibility to apply for a quota extension (by filling in this form); but be aware that, according to the experience of users of this forum, the answer from Google does not arrive quickly.
These are application-level quotas. When your application runs and a user authorizes it, the user uploads a video to their own account, but the quota cost is charged to your application.
If we look at the quota for my system:
My application itself has a daily quota limit of 10,000 units, but each user can use at most 180,000 units per minute -- which is moot, as my application as a whole can only spend 10,000 per day.
My application itself can use up to 1,800,000 units per minute, but again this is moot because the daily total for the application is 10,000.
Intro to YouTube API and cost-based quota for beginners, 2021.
I am trying to create a Google Compute Engine VM instance that will host my website. The traffic to this website will come mostly from Asia, so which region should I select for my VM instance?
How will the selected region affect pricing and performance?
Have a look at the Factors to consider when selecting regions section of the Best practices for Compute Engine region selection documentation:
Latency
The main factor to consider is the latency your user experiences. However, this is a complex problem because user latency is affected by multiple aspects, such as caching and load-balancing mechanisms.
In enterprise use cases, latency to on-premises systems or latency for a certain subset of users or partners is more critical. For example, choosing the closest region to your developers or on-premises database services interconnected with Google Cloud might be the deciding factor.
For example, you can surf some sites hosted in Asia and then compare your experience to sites hosted in the US; you'll notice a significant difference in responsiveness caused by latency. The same goes for your site: it will feel less responsive for distant users. You should set up your VM instance as close to your customers as possible.
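A quick way to compare candidate regions yourself is to time requests against a small test file hosted in each one (a sketch; the URLs below are placeholders you'd replace with your own test endpoints):

```python
import statistics
import time
import urllib.request

# Placeholder endpoints -- host a small static file in each candidate region.
endpoints = {
    "asia-southeast1": "https://asia-test.example.com/ping",
    "us-central1": "https://us-test.example.com/ping",
}

for region, url in endpoints.items():
    samples = []
    for _ in range(5):
        start = time.perf_counter()
        urllib.request.urlopen(url, timeout=10).read()
        samples.append((time.perf_counter() - start) * 1000)
    print(f"{region}: median {statistics.median(samples):.0f} ms")
```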
To estimate pricing, check the resources below:
Pricing
Google Cloud resource costs differ by region. The following resources are available to estimate the price:
Compute Engine pricing
Pricing calculator
Google Cloud SKUs
Billing API
If you decide to deploy in multiple regions, be aware that there are network egress charges for data synced between regions.
In addition, you can find a monthly cost estimate in the Create a new instance wizard as well -- try setting different regions and you'll get the numbers.
If your customers are located in different regions, you can try Google Cloud CDN:
Cloud CDN (Content Delivery Network) uses Google's globally distributed edge points of presence to cache HTTP(S) load-balanced content close to your users. Caching content at the edges of Google's network provides faster delivery of content to your users while reducing serving costs.
I am looking for a free solution for image hosting with a CDN. I have a website on a small paid hosting plan, and it will contain a lot of image galleries that I would like to upload to some cloud service like Google Drive and then use that cloud's CDN to link the images on my site. Any recommendations for a free solution?
CDN and cloud storage like Google Drive are two different things.
A CDN can be defined as:
A content delivery network or content distribution network is a geographically distributed network of proxy servers and their data centers. The goal is to provide high availability and high performance by distributing the service spatially relative to end users.
Cloud storage services, on the other hand, provide highly available and secure storage space in the cloud. Here is a link that explains the difference between the two in AWS terms (CloudFront vs S3).
If your website traffic is moderate and you want to use a free CDN, you can sign up for the AWS free tier. The free tier gives you 50 GB of data transfer out and 2,000,000 HTTP and HTTPS requests per month for Amazon CloudFront (the AWS CDN) for one year. Here's a tutorial for getting started with AWS CloudFront.
If you intend to use cloud storage services, the AWS free tier also provides you with 5 GB of space in AWS S3 for one year.
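For example, serving a gallery image from S3 through CloudFront boils down to uploading the object and linking it via the distribution's domain (a sketch with boto3; the bucket name and CloudFront domain are placeholders):

```python
import boto3  # AWS SDK for Python

BUCKET = "my-gallery-bucket"             # placeholder bucket name
CDN_DOMAIN = "d1234abcd.cloudfront.net"  # placeholder distribution domain

s3 = boto3.client("s3")
s3.upload_file(
    "gallery/photo.jpg", BUCKET, "gallery/photo.jpg",
    ExtraArgs={"ContentType": "image/jpeg"},
)

# With a CloudFront distribution pointing at the bucket, the image is
# then served from the nearest edge location:
print(f"https://{CDN_DOMAIN}/gallery/photo.jpg")
```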
Apart from the AWS free tier, you may also like to check out the free tiers of Microsoft Azure or Google Cloud Platform. Leveraging these free-tier resources, it's even possible to host your current website on these platforms almost for free, provided the usage stays within the free-tier limits.
I have blob storage and an app service in our Azure account.
I uploaded 200 GB from my local PC to blob storage through the app service.
The data transfer was charged (14 EUR) as DATA TRANSFER OUT - Zone 1, and I don't understand why.
I previously thought that this type of data transfer counts as IN to Zone 1 and is free (not charged).
I thought that all uploads to storage are free and all downloads from storage are charged.
Is this pricing correct?
If you use some kind of replicated storage, as described here, the outgoing replication traffic that flows across regions is charged to you (described here, at the bottom):
What is the Geo-Replication Data Transfer charge? When you write data into GRS accounts, that data is replicated to another Azure region. The Geo-Replication Data Transfer charge is the bandwidth cost of replicating that data to another Azure region.
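You can confirm whether your account is geo-replicated by checking its SKU (a sketch using the azure-mgmt-storage package; the subscription, resource group, and account names are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")
account = client.storage_accounts.get_properties(
    "<resource-group>", "<account-name>"
)

# Standard_GRS / Standard_RAGRS SKUs replicate writes to a paired region,
# which is what incurs the geo-replication data transfer charge.
print(account.sku.name)
```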
SQL Azure is responding at 90 ms avg.
IIS hosted in an Azure VM is responding at 2000 ms avg!
What can be done to improve the network speed of the Azure VM?
I have a Web API app hosted in an Azure virtual machine. This app connects to 20 SQL Azure databases using the Elastic Scale client API.
Response times from SQL Azure are good: 90 ms avg with 900 simultaneous users running queries against the Web API. I know SQL Azure is responding quickly because I am logging response times at the controller level in my Web API, including JSON deserialization of the response object.
However, in my load test I'm consistently getting response times of right around 2000 ms!
So there's a 20x discrepancy between what SQL Azure is delivering and what IIS in the virtual machine is returning. The virtual machine's network speed is now the bottleneck, and I can't figure out how to solve it.
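For reference, the load test measures client-side latency roughly like this (a simplified sketch; the endpoint URL is a placeholder):

```python
import concurrent.futures
import statistics
import time
import requests

URL = "https://myapi.example.com/api/search?q=test"  # placeholder endpoint

def timed_call(_):
    start = time.perf_counter()
    requests.get(URL, timeout=30).raise_for_status()
    return (time.perf_counter() - start) * 1000

# 900 concurrent callers, mirroring the load described above.
with concurrent.futures.ThreadPoolExecutor(max_workers=900) as pool:
    latencies = sorted(pool.map(timed_call, range(900)))

print(f"avg {statistics.mean(latencies):.0f} ms, "
      f"p95 {latencies[int(len(latencies) * 0.95)]:.0f} ms")
```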
I have looked through several posts and fixed the following:
Ensured power management is set to High Performance instead of Balanced.
However, the slow performance is still there.
I have the following setup:
Azure Virtual Machine D3 (4 cores, 14 GB memory). This is a 500 USD per month machine... it should be pretty fast, right?
20 SQL Azure databases at the Standard S3 level. I'm happy with these delivering 90 ms with 900 users... actually, I think they can support a lot more than 900 users at the 90 ms average response time.
Web API on .NET 4.5.
I'm very glad that sharding has improved the performance of my database searches, but now the network speed of the virtual machine I'm using is degrading my overall app performance to the point that it's a showstopper.
Any pointers on what I might be missing would be extremely appreciated.
Cheers,
Hector