How to make Azure images load faster?

Loading images from the blob is very slow, something like 3-4 seconds per image.
What can I do to make it faster?

If this is slow from multiple sources, the issue is likely on the client side. You can turn on Azure Storage Analytics and then compare the server time to the end-to-end (E2E) time. Server time is the time the request took to process on the server; E2E time is the time the request took end to end, including network transmission. The difference between the two tells you how much time you are spending on the network.
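As a rough sketch of what that comparison can look like: classic Storage Analytics writes semicolon-delimited log blobs to a special $logs container, and in the v1.0 log format the fields include end-to-end latency and server latency in milliseconds. The connection string, field positions, and threshold below are assumptions to verify against your own account and log version:

```python
# Sketch: compare server latency to end-to-end latency from classic
# Storage Analytics logs (assumes logging is already enabled and the
# v1.0 log format -- verify the field positions for your version).
from azure.storage.blob import ContainerClient

logs = ContainerClient.from_connection_string(
    "<your-connection-string>",  # placeholder -- use your account's string
    container_name="$logs",
)

for blob in logs.list_blobs():
    data = logs.download_blob(blob.name).readall().decode("utf-8")
    for line in data.splitlines():
        fields = line.split(";")
        # In the v1.0 format, fields[5] is end-to-end-latency-in-ms and
        # fields[6] is server-latency-in-ms.
        e2e_ms, server_ms = int(fields[5]), int(fields[6])
        network_ms = e2e_ms - server_ms
        if network_ms > 1000:  # arbitrary "slow on the wire" threshold
            print(f"{fields[1]} {fields[2]}: e2e={e2e_ms} ms, server={server_ms} ms")
```

If E2E latency is consistently far above server latency, the time is going to the network (or the client), not to blob storage itself.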

Related

Ajax Call reduce response time

I'm trying to reduce the AJAX response time of my website, deployed on Hostinger. It takes around 1.3 seconds to get a response on the live site, but only about 220 ms on my local machine. Is there any way to make it faster on live too? Even with null data it takes that long on live.
The speed of an AJAX response depends on several factors:
- The speed of the remote machine (the host). Most of the time the different sites are only virtual hosts multiplexed by a bigger machine, so if you share your space with other high-traffic sites, yours will not respond quickly. You can ask your provider to give you a new spot on another of their servers; this has helped me several times.
- Your connection, or your provider's connection, may be slow. If you have access to another test machine for routing measurements, you can check whether your connection is really the bottleneck. If it is not, you can ask your provider to re-route the connection if they are able to.
- Does your AJAX call make heavy use of databases (e.g. MySQL)? It could be a slow database installation (maybe with too many users). Normally you can move the database yourself, or contact the provider to move your databases to another virtual machine.
You see: if it runs fast on your computer and slow at your provider, it is most of the time your provider's problem, and you should contact them.
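To narrow down which of these factors dominates, it helps to measure the raw response time of the endpoint from more than one location. Below is a minimal Python sketch (the URL is a placeholder); run it from your local machine and from another network, then compare the results:

```python
# Sketch: time an endpoint repeatedly to separate a one-off slow request
# from consistently high latency. Run from several networks and compare.
import statistics
import requests

URL = "https://example.com/api/endpoint"  # placeholder endpoint

timings = []
for _ in range(20):
    r = requests.get(URL)
    # r.elapsed measures from sending the request until the response
    # headers arrive, so it includes network plus server processing time.
    timings.append(r.elapsed.total_seconds() * 1000)

print(f"median: {statistics.median(timings):.0f} ms, "
      f"min: {min(timings):.0f} ms, max: {max(timings):.0f} ms")
```

If the minimum live time is close to your local 220 ms, the slowness is intermittent contention on the shared host; if even the fastest request takes well over a second, the overhead is structural (routing, host, or database) and worth raising with your provider.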

How to monitor very slow data loading in BigQuery

I'm loading uncompressed JSON files into BigQuery in C#, using the Google API method BigQueryClient.UploadJsonAsync. Uploaded files range from 1 MB to 400 MB. I've been uploading many TB of data with no issues these last months. But for the last two days, uploading to BigQuery has become very slow.
I was able to upload at 600 MB/s, but now I get at most 15 MB/s.
I have checked my connection, and I can still exceed 600 MB/s in connection tests like Speed Test.
Also, strangely, BigQuery load throughput seems to depend on the hour of the day: around 3 PM PST my throughput falls to roughly 5-10 MB/s.
I have no idea how to investigate this.
Is there a way to monitor BigQuery data loading?
It's unclear whether you're measuring the time from when you start sending bytes until the load job is inserted, or the time from when you start sending until the load job is completed. The first is primarily a question of network-level throughput, whereas the second also includes ingestion time in the BigQuery service. You can examine the load job metadata to help figure this out.
If you're trying to suss out network issues with sites like speedtest, make sure you're choosing a suitable remote node to test against; by default, they favor something with close network locality relative to the client you are testing.
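As a sketch of that kind of inspection with the google-cloud-bigquery Python client (the project ID is a placeholder): a load job's created timestamp marks when the job was inserted (i.e., your upload finished), while started and ended bound the ingestion window on the BigQuery side.

```python
# Sketch: inspect recent load jobs to separate upload time from
# BigQuery-side ingestion time. The project ID is a placeholder.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

for job in client.list_jobs(max_results=10):
    if job.job_type != "load" or job.ended is None:
        continue
    # created = job inserted (upload done); started/ended = ingestion window.
    queued_s = (job.started - job.created).total_seconds()
    ingest_s = (job.ended - job.started).total_seconds()
    print(f"{job.job_id}: queued {queued_s:.1f} s, ingestion {ingest_s:.1f} s")
```

If the ingestion window is short but your wall-clock upload is slow, the bottleneck is on the network path to Google, not in the BigQuery service.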

Elasticsearch speed vs. Cloud (localhost to production)

I have got a single ELK stack with a single node running in a Vagrant virtual box on my machine. It has 3 indexes which are 90 MB, 3.6 GB, and 38 GB.
At the same time, I have a JavaScript application running on the host machine, consuming data from Elasticsearch. Locally it runs with no problems; the speed is perfect.
The issue comes when I put my JavaScript application in production, as the Elasticsearch endpoint in the application has to change from localhost:9200 to MyDomainName.com:9200. The application runs fine within the company, but when I access it from home the speed drastically decreases, and it often crashes. However, when I go to Kibana from home, running queries there is fine.
The company is using BT broadband with a download speed of 60 Mb and 20 Mb upload. It doesn't use a fixed IP, so I have to update the A record manually whenever the IP changes, but I don't think that is relevant to the problem.
Is the internet speed the main issue affecting the loading speed outside of the company? How do I improve this? Is the cloud (a CDN?) the only option that would make things run faster? If so, how much would it cost to host it in the cloud, assuming I would index a lot of documents initially but at most about 10 MB per day after that?
UPDATE 1: Metrics from sending a request from home, using Chrome > Network:
Queued at 32.77s
Started at 32.77s
Resource Scheduling
- Queueing 0.37 ms
Connection Start
- Stalled 38.32s
- DNS Lookup 0.22ms
- Initial Connection
Request/Response
- Request sent 48 μs
- Waiting (TTFB) 436.61 ms
- Content Download 0.58 ms
UPDATE 2:
The stalled period seems to be much shorter when I use a VPN.
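One way to take the browser out of the equation is to reproduce that timing breakdown from the command line. Here is a hedged Python sketch using pycurl (the URL and path are placeholders; any cheap Elasticsearch endpoint will do). Running it from home, from the office, and over the VPN should show whether the stall is DNS, TCP connection setup, or server wait time:

```python
# Sketch: break a request down into DNS, connect, TTFB, and total time,
# similar to Chrome's Network tab. The endpoint is a placeholder.
from io import BytesIO
import pycurl

c = pycurl.Curl()
c.setopt(pycurl.URL, "http://MyDomainName.com:9200/_cluster/health")
c.setopt(pycurl.WRITEDATA, BytesIO())  # discard the response body
c.perform()

print(f"DNS lookup:  {c.getinfo(pycurl.NAMELOOKUP_TIME) * 1000:.1f} ms")
print(f"TCP connect: {c.getinfo(pycurl.CONNECT_TIME) * 1000:.1f} ms")
print(f"TTFB:        {c.getinfo(pycurl.STARTTRANSFER_TIME) * 1000:.1f} ms")
print(f"Total:       {c.getinfo(pycurl.TOTAL_TIME) * 1000:.1f} ms")
c.close()
```

A 38-second stall that disappears over a VPN points at the network path (ISP routing, the router's connection handling, or port 9200 being throttled) rather than at Elasticsearch itself.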

Large time difference for data to reach localhost

In my Laravel application, I am making an HTTP request to a remote server using the Guzzle library. However, it takes a long time for the data to reach localhost.
Here is the response I get in the browser:
However, if I run ping server_IP I get roughly 175 ms as the average round-trip time.
I also monitored my CPU usage while making requests in an infinite loop, but I couldn't find much usage.
I also tried hosting my Laravel application on an nginx server, but I still observe around 1-1.1 seconds of overhead.
What could be causing this delay, and how can I reduce it?
There are a few potential reasons:
- Laravel is not the fastest framework; a few hundred files need to be loaded on every single request. If your server doesn't have an SSD drive, your performance will be terrible. I'd recommend creating a RAM disk and serving the files from there.
- Network latency. Open up Wireshark and look at all the requests that need to be made. All of them impact performance negatively, and some of them you cannot get around (DNS, ...). Try to combine the CSS and JS files so that the number of requests is minimised.
- The database connection on the server side may take a long time to set up, or it may retrieve lots of data.
Bear in mind that in most situations the latency is due to IO constraints rather than CPU usage, especially in test/preproduction environments where the number of requests per second the server gets rounds to 0.
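One quick way to see where the ~1 second goes is to compare the time-to-first-byte of a route that goes through Laravel against a static file served directly by nginx on the same host. A small Python sketch, with both URLs as placeholders:

```python
# Sketch: compare TTFB of a Laravel route against a static file on the
# same host to isolate framework bootstrap time from network latency.
# Both URLs are placeholders.
import requests

def ttfb_ms(url: str, n: int = 10) -> float:
    """Return the best-of-n time to first byte, in milliseconds."""
    # stream=True stops requests from downloading the body, so .elapsed
    # approximates time-to-first-byte rather than total transfer time.
    best = min(
        requests.get(url, stream=True).elapsed.total_seconds() for _ in range(n)
    )
    return best * 1000

laravel_ms = ttfb_ms("http://server_IP/api/ping")    # route through Laravel
static_ms = ttfb_ms("http://server_IP/robots.txt")   # served directly by nginx

print(f"Laravel TTFB: {laravel_ms:.0f} ms, static TTFB: {static_ms:.0f} ms")
print(f"Framework overhead ~ {laravel_ms - static_ms:.0f} ms")
```

If the static file answers in roughly one network round trip (~175 ms here) while the Laravel route takes about a second, the overhead is framework bootstrap and database work, not the network.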

Performance of NewRelic Real User Monitoring

We've been using New Relic Real User Monitoring to track performance and activity.
We've noticed that the browser metrics are showing the majority of time is just Network times.
Even extremely small and simple server pages show average times of 3-5 seconds, even though they are just a few KB in size and their web application and rendering times are mere milliseconds.
The site is hosted in the UK and when I run Chrome's Network Developer Tools I can see the page loading in around 50ms and then the hit to beacon-1.newrelic.com (in the USA) taking a further 500ms.
The majority of our clients do not have the luxury of high bandwidth or modern browsers, and I believe that New Relic itself is causing them a particularly poor user experience.
Are there any ways of making the New Relic calls perform better? Can I make the New Relic call go to a local (UK- or Europe-based) beacon?
I don't want to turn off New Relic, but at the moment it is causing more performance issues than it is alerting us to.
New Relic real user monitoring (RUM) does not affect the page load time for your users. The 500 ms that you are seeing refers to the amount of time it takes for the RUM data we collected from your app to reach our servers here in the U.S. The data is transferred after the pages are loaded, so it doesn't affect the page load at all for your users. This 500 ms of data travel time, therefore, is not part of any of our measurements of the networking, page rendering or DOM processing time.
New Relic calculates network time by first finding the total amount of time your application takes from request to page load, and then subtracting any application server time from that total. It is assumed that the resulting amount of time is "network" time. As such, it doesn't include the amount of time it takes to send that data to New Relic's servers. See this page for more info on how RUM works:
https://newrelic.com/docs/features/how-does-real-user-monitoring-work
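In other words, the reported network time is a simple subtraction. A toy illustration with made-up numbers:

```python
# Toy illustration of the RUM network-time calculation described above
# (all numbers are made up).
total_ms = 3500      # request start to page load, as seen by the browser
app_server_ms = 40   # time measured inside the application server

network_ms = total_ms - app_server_ms  # what RUM attributes to "network"
print(f"network time: {network_ms} ms")  # the beacon call itself is excluded
```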
If you're worried that there might be a bug or that your numbers don't look accurate, you can always file a support ticket with New Relic so we can look at your account in more detail.
