I am based in the UK and have two web servers: one German-based (1&1) and the other UK-based (Easyspace).
I recently signed up for the UK Easyspace server because it was about the same price as my 1&1 server, but also because I wanted to see whether sites hosted on a UK server gave better results in terms of UK-based traffic.
It seems my traffic is roughly the same for both servers... however, 1&1's server performance and customer service are much better than Easyspace's, so I was thinking about cancelling it and getting another 1&1 server.
I understand the latency issues, where USA/Asia hosting would be much slower for UK traffic, but I am just wondering what your thoughts are on traffic, SEO, etc., and whether you think I should stick with a UK server or if it doesn't matter.
Looking forward to your replies.
I have never heard of common search engines ranking sites by their response time, as it is highly variable due to the nature of the internet.
If a search engine were to penalize you for the subnet you are on, then you likely have bigger problems.
I get better results on google.com.au for my sites than on other flavours of Google, even though the sites are not hosted in Australia. So I would suggest that the actual physical location of the servers won't matter so much; if you want to rank higher on google.co.uk, you might want a .co.uk domain.
Google associates a region with your site mostly through its suffix (TLD/SLD, e.g. .co.uk), but if you create a Google Webmaster Tools account you can tell it otherwise in the odd case where it makes a mistake.
As far as traffic is concerned, the site will load quickly for UK visitors, so I suggest using this server if most of your visitors are from the UK. Server location does not have anything to do with SEO.
Stick with your UK server if you think it's better.
My main concern is losing UK-based customers if the server is located outside the UK, but it appears from the comments that this is probably not the case.
However, my UK server is based in Scotland, while my other server, based in Germany, is actually closer to London than Scotland is.
Just to compare the speed between Scotland server and Germany server:
=== Germany Based ===
Pinging firststopdigital.com [87.106.101.189]:
Ping #1: Got reply from 87.106.101.189 in 126ms [TTL=46]
Ping #2: Got reply from 87.106.101.189 in 126ms [TTL=46]
Ping #3: Got reply from 87.106.101.189 in 126ms [TTL=46]
Ping #4: Got reply from 87.106.101.189 in 126ms [TTL=46]
Variation: 0.4ms (+/- 0%)
Shortest Time: 126ms
Average: 126ms
Longest Time: 126ms
=== UK Based ===
Pinging pb-net.co.uk [62.233.81.163]:
Ping #1: Got reply from 62.233.81.163 in 120ms [TTL=55]
Ping #2: Got reply from 62.233.81.163 in 119ms [TTL=55]
Ping #3: Got reply from 62.233.81.163 in 119ms [TTL=55]
Ping #4: Got reply from 62.233.81.163 in 119ms [TTL=55]
Variation: 0.3ms (+/- 0%)
Shortest Time: 119ms
Average: 119ms
Longest Time: 120ms
The difference is around 7ms, which is not much at all.
Incidentally I just performed a ping to a USA based domain I own:
Pinging pbnetltd.com [74.86.61.36]:
Ping #1: Got reply from 74.86.61.36 in 6.4ms [TTL=121]
Ping #2: Got reply from 74.86.61.36 in 6.3ms [TTL=121]
Ping #3: Got reply from 74.86.61.36 in 6.2ms [TTL=121]
Ping #4: Got reply from 74.86.61.36 in 6.3ms [TTL=121]
Variation: 0.2ms (+/- 3%)
Shortest Time: 6.2ms
Average: 6.3ms
Longest Time: 6.4ms
The USA timings are surprisingly quick considering the extra distance across the Atlantic to NY and back (it is 9am UK time, so the USA is asleep - I will try again tonight).
I created a test in JMeter:
Add > Sampler > HTTP Request = Get
Server Name = dainikbhaskar.com
No. of threads(users) = 1
Ramp-up period (seconds) = 1
Loop Count = 1
(My internet connection is broadband with a speed of 50 Mbps.)
I ran the test; it ran successfully, and the latency came out at 127 ms, sometimes less than 100 ms in subsequent executions.
I switched off my Wi-Fi, connected my laptop to a mobile hotspot, and executed the same test.
Now the latency is 607, 932, 373, 542, 915 ms.
I believe this is happening due to the internet connection speed, as the rest of the inputs are the same.
Please confirm whether my perception is correct? :)
It is correct.
You can also get network latency from https://www.speedtest.net/ or https://fast.com/
Latency is the time from sending the request until the first byte of the response arrives, the so-called "time to first byte".
In JMeter's world:
Latency is connect time + time to send the request + time to get the first byte of the response.
Elapsed time is latency + time to get the last byte of the response.
More information:
JMeter Glossary
Understanding Your Reports: Part 1 - What are KPIs?
If you get 5x higher latency on the other connection, it means that the majority of the time is spent on the packets travelling back and forth. You can get a more precise picture by looking at the Network tab of your browser's developer tools or by using a dedicated tool like Lighthouse.
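If you want to reproduce the latency vs. elapsed-time split outside JMeter, here is a minimal Python sketch (standard library only; the host is the same one used in the test above) that times the first byte and the last byte of a single GET request:

import time
import http.client

HOST = "dainikbhaskar.com"               # same site as in the JMeter test above

conn = http.client.HTTPConnection(HOST, timeout=10)
start = time.perf_counter()
conn.request("GET", "/")                 # connect + send the request
resp = conn.getresponse()                # returns once the status line and headers arrive
resp.read(1)                             # first byte of the response body
latency_ms = (time.perf_counter() - start) * 1000
resp.read()                              # drain the rest of the body
elapsed_ms = (time.perf_counter() - start) * 1000
conn.close()

print(f"Latency (time to first byte): {latency_ms:.0f} ms")
print(f"Elapsed (last byte received): {elapsed_ms:.0f} ms")

Run it a few times on Wi-Fi and again on the mobile hotspot and you should see the same pattern as in JMeter.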
I'm trying to determine if there is any difference in time between two time servers in Windows. For example, I have time.windows.com and time.nist.gov. Is there a simple way to compare the time difference?
From Windows you can compare both time sources to your own machine and approximate the difference:
w32tm /monitor /computers:time.windows.com,time.nist.gov
time.windows.com[52.179.17.38:123]:
ICMP: error IP_REQ_TIMED_OUT - no response in 1000ms
NTP: -0.0528936s offset from local clock
RefID: utcnist2.colorado.edu [128.138.141.172]
Stratum: 2
time.nist.gov[132.163.97.1:123]:
ICMP: error IP_REQ_TIMED_OUT - no response in 1000ms
NTP: -0.0476330s offset from local clock
RefID: 'NIST' [0x5453494E]
Stratum: 1
Warning:
Reverse name resolution is best effort. It may not be
correct since RefID field in time packets differs across
NTP implementations and may not be using IP addresses.
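If you would rather script the comparison, here is a rough Python sketch using the third-party ntplib package (an assumption on my part - install it with pip install ntplib); it queries each server and reports its offset from your local clock, much like the NTP lines above:

import ntplib                                            # third-party: pip install ntplib

SERVERS = ["time.windows.com", "time.nist.gov"]

client = ntplib.NTPClient()
offsets = {}
for server in SERVERS:
    response = client.request(server, version=3)         # one SNTP query per server
    offsets[server] = response.offset                    # seconds relative to the local clock
    print(f"{server}: {response.offset:+.4f}s offset from local clock, stratum {response.stratum}")

# The local clock's own error cancels out in the difference between the two sources.
diff_ms = abs(offsets[SERVERS[0]] - offsets[SERVERS[1]]) * 1000
print(f"difference between the two servers: {diff_ms:.1f} ms")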
Good Luck!
Shane
I am building autocomplete functionality and realized that the amount of time taken between the client and the server is too high (in the range of 450-700 ms).
My first stop was to check whether this is a result of server delay.
But as you can see, these Nginx logs almost always show a request time of 0.001 seconds (the request time is the last column), so it is hardly a cause for concern.
So it became very evident that I am losing time between the server and the client. My benchmark is Google Instant's response time, which is almost always in the range of 30-40 milliseconds - an order of magnitude lower.
Although it's easy to say that Google has massive infrastructure to deliver at this speed, I wanted to push myself to learn whether this is possible for someone who is not at that level. If not down to 60 milliseconds, I at least want to shave off 100-150 milliseconds.
Here are some of the strategies I’ve managed to learn.
Tune TCP slow start and the initial congestion window (initcwnd)
Enable SPDY if you are on HTTPS
Ensure responses are HTTP-compressed
Etc.
What are the other things I can do here?
e.g.
Does having a persistent connection help?
Should I reduce the response size dramatically?
Edit:
Here are the ping and traceroute numbers. The site is served via Cloudflare from a Fremont Linode machine.
mymachine-Mac:c name$ ping site.com
PING site.com (160.158.244.92): 56 data bytes
64 bytes from 160.158.244.92: icmp_seq=0 ttl=58 time=95.557 ms
64 bytes from 160.158.244.92: icmp_seq=1 ttl=58 time=103.569 ms
64 bytes from 160.158.244.92: icmp_seq=2 ttl=58 time=95.679 ms
^C
--- site.com ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 95.557/98.268/103.569/3.748 ms
mymachine-Mac:c name$ traceroute site.com
traceroute: Warning: site.com has multiple addresses; using 160.158.244.92
traceroute to site.com (160.158.244.92), 64 hops max, 52 byte packets
1 192.168.1.1 (192.168.1.1) 2.393 ms 1.159 ms 1.042 ms
2 172.16.70.1 (172.16.70.1) 22.796 ms 64.531 ms 26.093 ms
3 abts-kk-static-ilp-241.11.181.122.airtel.in (122.181.11.241) 28.483 ms 21.450 ms 25.255 ms
4 aes-static-005.99.22.125.airtel.in (125.22.99.5) 30.558 ms 30.448 ms 40.344 ms
5 182.79.245.62 (182.79.245.62) 75.568 ms 101.446 ms 68.659 ms
6 13335.sgw.equinix.com (202.79.197.132) 84.201 ms 65.092 ms 56.111 ms
7 160.158.244.92 (160.158.244.92) 66.352 ms 69.912 ms 81.458 ms
I may well be wrong, but personally I smell a rat. Your times aren't justified by your setup; I believe that your requests ought to run much faster.
If at all possible, generate a short query using curl and intercept it with tcpdump on both the client and the server.
It could be a bandwidth/concurrency problem on the hosting. Check out its diagnostic panel, or try estimating the traffic.
You can try saving a query response into a static file and then requesting that file (taking care not to trigger the local browser cache...), to see whether the problem might be in processing the data (either server- or client-side).
Does this slowness affect every request, or only the autocomplete ones? If the latter, and no matter what nginx says, it might be some inefficiency/delay in retrieving or formatting the autocompletion data for output.
Also, you can try serving a static response bypassing nginx altogether, in case this is an issue with nginx (and for that matter: have you checked nginx's error log?).
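To put a number on the static-file comparison suggested above, something like this rough Python sketch will show where the time goes (the requests package and both URLs are placeholders - point them at your real autocomplete endpoint and the saved static copy of its output):

import time
import requests                                             # third-party: pip install requests

# Placeholders - substitute your real autocomplete endpoint and a saved static copy of its output
DYNAMIC_URL = "https://site.com/autocomplete?q=test"
STATIC_URL = "https://site.com/static/autocomplete-test.json"

def time_url(url, runs=5):
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        requests.get(url, headers={"Cache-Control": "no-cache"})   # discourage cached responses
        timings.append((time.perf_counter() - start) * 1000)
    return min(timings), sum(timings) / len(timings)

for label, url in (("dynamic", DYNAMIC_URL), ("static", STATIC_URL)):
    best, avg = time_url(url)
    print(f"{label}: best {best:.0f} ms, average {avg:.0f} ms")

If the two come out close, the time is going into the network and connection setup rather than into generating the autocomplete data.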
One approach I didn't see you mention is to use SSL sessions: you can add the following to your nginx conf to make sure that an SSL handshake (a very expensive process) does not happen with every connection request:
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
See "HTTPS server optimizations" here:
http://nginx.org/en/docs/http/configuring_https_servers.html
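To see how much of each request is connection setup (which is also what your "persistent connection" question is getting at), here is a minimal client-side sketch with the requests library - the URL is again a placeholder - that keeps one connection open across several requests:

import time
import requests                                   # third-party: pip install requests

URL = "https://site.com/autocomplete?q=test"      # placeholder for your endpoint

with requests.Session() as session:               # keep-alive: the TCP/TLS connection is reused
    for i in range(5):
        start = time.perf_counter()
        session.get(URL)
        print(f"request {i + 1}: {(time.perf_counter() - start) * 1000:.0f} ms")

If the first request is noticeably slower than the later ones, persistent connections and the ssl_session_cache settings above are worth pursuing.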
I would recommend using New Relic if you aren't already. It is possible that your server-side code is the issue; if you think that might be the case, there are quite a few free code-profiling tools.
You may want to consider preloading the autocomplete options in the background while the page is rendered, and then saving a trie (or whatever structure you use) on the client in local storage. When the user starts typing in the autocomplete field, you would not need to send any requests to the server; instead, you would query local storage.
Web SQL Database and IndexedDB introduce databases to the clientside.
Instead of the common pattern of posting data to the server via
XMLHttpRequest or form submission, you can leverage these clientside
databases. Decreasing HTTP requests is a primary target of all
performance engineers, so using these as a datastore can save many
trips via XHR or form posts back to the server. localStorage and
sessionStorage could be used in some cases, like capturing form
submission progress, and have seen to be noticeably faster than the
client-side database APIs.
For example, if you have a data grid component or an inbox with
hundreds of messages, storing the data locally in a database will save
you HTTP roundtrips when the user wishes to search, filter, or sort. A
list of friends or a text input autocomplete could be filtered on each
keystroke, making for a much more responsive user experience.
http://www.html5rocks.com/en/tutorials/speed/quick/#toc-databases
I am learning the Lift framework. I used the project template from git://github.com/lift/lift_25_sbt.git and started the server with the container:start sbt command.
This template application displays just a simple menu. If I use ab from Apache to measure performance, it's pretty bad. Am I missing something fundamental to improve performance?
C:\Program Files\Apache Software Foundation\httpd-2.0.64\Apache2\bin>ab -n 30 -c 10 http://127.0.0.1:8080/
Benchmarking 127.0.0.1 (be patient).....done
Server Software: Jetty(8.1.7.v20120910)
Server Hostname: 127.0.0.1
Server Port: 8080
Document Path: /
Document Length: 2877 bytes
Concurrency Level: 10
Time taken for tests: 8.15625 seconds
Complete requests: 30
Failed requests: 0
Write errors: 0
Total transferred: 96275 bytes
HTML transferred: 86310 bytes
Requests per second: 3.74 [#/sec] (mean)
Time per request: 2671.875 [ms] (mean)
Time per request: 267.188 [ms] (mean, across all concurrent requests)
Transfer rate: 11.73 [Kbytes/sec] received
Are you running it in production mode? I found I got about 30 rps in development mode but over 250 in production mode. (https://www.assembla.com/spaces/liftweb/wiki/Run_Modes)
As mentioned earlier, you should run Lift in production mode. This is the main key to getting good performance: all templates are cached this way, and other optimizations apply.
If you want to measure something that isn't abstract and theoretical, you should give the JVM time to "warm up" and apply its JIT optimizations. So, fire off roughly a thousand requests first and totally ignore them (this should only take a couple of seconds). After that, measure the real performance of the already-started server (a rough sketch of this warm-up-then-measure cycle follows at the end of this answer).
There are some slight JVM optimizations, although they seem more like a hack to me, and they give a boost of no more than around 20%.
Other small hacks include serving static content with nginx, starting the application in a dedicated server instead of the Simple Build Tool, and so on.
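As a rough way to automate that warm-up-then-measure cycle, here is a small Python sketch (assuming the app is still listening on 127.0.0.1:8080, as in the ab run above); it is sequential, so it measures single-client throughput rather than ab's concurrency of 10:

import time
import urllib.request

URL = "http://127.0.0.1:8080/"       # same endpoint as the ab run above

# Warm-up: give the JVM time to JIT-compile the hot paths; ignore these requests entirely
for _ in range(1000):
    urllib.request.urlopen(URL).read()

# Measurement: time a batch of requests against the already-warm server
runs = 200
start = time.perf_counter()
for _ in range(runs):
    urllib.request.urlopen(URL).read()
elapsed = time.perf_counter() - start
print(f"{runs / elapsed:.1f} requests/second after warm-up")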
I have a weird problem that I hope you can help me out with.
On our development server we are running Windows 2008R2 with IIS 7.5 on a virtual x64 instance with 8GB RAM.
Here I call a WCF method that uses ThreadPool.QueueUserWorkItem to process a large amount of hierarchical data. This works fine and runs rather fast (a 270 MB XML file is read and processed, producing 190,035 records within 379 seconds). The client is done calling the method about 250 seconds in.
Now the same "workflow" on Windows Azure is a whole other story. Although the setup is similar (Large instances in a round-robin configuration), Windows Azure stops within seconds of the client disconnecting. This means that only 160,055 records are written, and far more slowly - 917 seconds. The problem here is that I am missing around 30,000 records which should now be queued on the two Azure instances, but it seems that, on client disconnect, the remaining work is abandoned.
The client uses HttpWebRequest for communication and both solutions run .NET 4.0.
What is it that I am missing here?
Thanks in advance for any help regarding this issue.
My bad, guys - I simply could not in my wildest imagination have foreseen that Windows Azure would be so slow... so by increasing the HttpWebRequest timeout from 2 minutes to 30 minutes I was able to achieve the same data volume as in our development environment.
So I will not delete the question, but will let it stand as a reference for you soon-to-come Azure guys.
I am positive that Azure (and other cloud providers) are the future, but from Denmark to "North Europe" the latency is high - and SQL Azure has yet to prove it can perform when it comes to OLTP and normalized databases.
DEVELOPMENT (VIRTUAL ENVIRONMENT)
190,335 records from a 299 MB file took 379 seconds on a single instance
WINDOWS AZURE (NORTH EUROPE)
190,335 records from a 299 MB file took 1,400 seconds on two LARGE instances
The good news is that WCF and ThreadPool work flawlessly, and no special considerations (except a high timeout) are necessary.
Just to clarify, the 299 MB file is split up into multiple REST calls to the server, in a format similar to this one:
<?xml version="1.0" encoding="UTF-8"?>
<HttpPost absolutePath="A/B/C/D/E/OO">
<Parameters xmlns="http://somenamespace">
<A>Package</A>
<B>100</B>
<C>Generic</C>
<D>ReceiverParty</D>
<E>
<F xmlns="http://somenamespace">
<G xmlns="http://somenamespace/Product">Long Text</G>
<H xmlns="http://somenamespace/Product">1</H>
<I xmlns="http://somenamespace/Product">PK</I>
<J xmlns="http://somenamespace/Product">5995</J>
<K xmlns="http://somenamespace/Product">
<L xmlns="http://somenamespace/P/Q">Discount</L>
<M xmlns="http://somenamespace/P/Q">1000</M>
<N xmlns="http://somenamespace/P/Q">6995</N>
</K>
</F>
</E>
<OO>
<O>
<A>Product</A>
<B>100</B>
<C>Generic</C>
<D>ReceiverParty</D>
<E>
<F xmlns="http://somenamespace">
<G xmlns="http://somenamespace/Product">Long Text</G>
<H xmlns="http://somenamespace/Product">1</H>
<I xmlns="http://somenamespace/Product">PK</I>
<J xmlns="http://somenamespace/Product">5995</J>
<K xmlns="http://somenamespace/Product">
<L xmlns="http://somenamespace/P/Q">Discount</L>
<M xmlns="http://somenamespace/P/Q">1000</M>
<N xmlns="http://somenamespace/P/Q">6995</N>
</K>
</F>
</E>
</O>
</OO>
</Parameters>
</HttpPost>