Almost every AJAX call I make results in an expensive DNS lookup. Are there headers I can set that will prevent the browser from making DNS lookups? Or perhaps some server-side settings?
How do you know that this is causing a performance problem? Did you use Wireshark to verify? I very much doubt that DNS lookups are to blame.
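If you want to verify, curl's timing variables make this easy to measure; a quick sketch (the URL is a placeholder for one of your AJAX endpoints):
curl -s -o /dev/null -w "dns: %{time_namelookup}s  connect: %{time_connect}s  total: %{time_total}s\n" http://example.com/api/data
If time_namelookup is consistently large, DNS really is the bottleneck; otherwise the time is going elsewhere.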
Using the IP address directly should prevent the DNS lookup :)
You could add the hostname/IP mapping in question to your hosts file (on the computer where the browser is running).
The exact location depends on the operating system. On Windows this is %windir%\system32\drivers\etc\hosts; on Unix-like systems it is /etc/hosts.
As far as I know, no DNS lookup will happen for entries in the hosts file (at least not on Windows).
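For example, a hosts entry pairing the server's IP with the hostname your AJAX calls use might look like this (both values are placeholders):
203.0.113.10    api.example.com
After that, the browser resolves api.example.com straight from the file without querying a DNS server.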
I'm currently working on a client project and access their files via FTP. Their server is behind a firewall, and they're asking for my IP address, I'm guessing for whitelisting.
The problem is, my IP address is dynamic and changes quite a lot throughout the day. Is there any way around this?
Thanks in advance.
The best way to avoid paying for a static IP is to carry on using a dynamic IP address, but with a dynamic DNS provider such as No-IP, which you can update every time your IP address changes (routers will often do this for you automatically, and there are clients for Windows / OS X / Linux, such as ddclient). That way you can hand out something like magpie.no-ip.com instead of an IP address, and it will always resolve to your current public address.
You can find the whole answer here: https://superuser.com/questions/455226/can-you-configure-dynamic-to-be-static-yourself-without-changing-your-isp
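If you use ddclient, a minimal /etc/ddclient.conf for No-IP might look roughly like this (a sketch; the hostname and credentials are placeholders, so check the exact protocol settings against the ddclient documentation):
protocol=noip
use=web                      # discover the public IP via a web service
login=your-noip-username
password=your-noip-password
magpie.no-ip.com
Run ddclient as a daemon (or from cron) and it pushes an update whenever the public IP changes.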
I'm developing a local site on Windows 7 using WAMP, with a domain of sitename.local. I have sitename.local in my hosts file pointing to 127.0.0.1, yet I'm getting DNS lookup times of around 6 seconds in Chrome. If I immediately reload the page it's instant, but if I wait a minute and reload I have to wait another 6 seconds for the DNS lookup.
My question: why is it doing a DNS lookup at all if it's a local address listed in my hosts file? And is there any way to reduce this time?
A bit of additional information: using chrome://net-internals I can see that the cache entry only lasts a minute, and when it expires, that's what triggers the lengthy lookup. But why should that matter when nothing should need to be looked up?
Your issue may be with the .local domain, which has special behavior. I'm not a DNS expert, but I had the "pleasure" of debugging a related issue with a friend some time ago. It boiled down to the OS trying to resolve the .local domain via multicast DNS on the network. See a quick explanation here.
Try changing the name to sitename.com to force resolution through the hosts file.
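For example, the hosts entry would become:
127.0.0.1    sitename.com
(The reserved .test TLD, e.g. sitename.test, also sidesteps the multicast behavior and can never clash with a real domain.) Remember to update the ServerName in your Apache vhost to match.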
I've got XAMPP set up on my laptop (OS X 10.6) for dev, and I wanted to use VirtualDocumentRoot so that I could do *.localhost and it would automap to the folder under my sites directory. I've got this all set up fine and it works great, but when I got to work today, I found an issue with the way our LAN handles DNS.
Long story short: instead of checking the LAN's DNS server for local domains, it goes out to the root servers. Is there a way to get BIND to check the DHCP-supplied DNS server for addresses it's not responsible for? Or alternatively, is there a way to get my OS to use the DHCP DNS server first and then fall back to the local one, with minimal performance hit?
Thanks!
I'm using Arch Linux, but as Mac OS X is based on a *nix system, maybe these ideas help you:
Take a look at the file /etc/resolv.conf. In my setup this file is automatically generated by NetworkManager.
This document describes ways to update /etc/resolv.conf when dhcpcd, NetworkManager, or dhclient is used: https://wiki.archlinux.org/index.php/Dnsmasq#DHCP_Setup
This way you just prepend the local DNS server ahead of the DHCP-supplied one (or the static one, if you switch to a static configuration). Make sure you remove all forwarders from your DNS server.
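The resulting /etc/resolv.conf would list your local server first, with the DHCP-supplied one as a fallback (both addresses are examples):
nameserver 127.0.0.1      # local BIND/dnsmasq, answers *.localhost
nameserver 192.168.1.1    # DHCP-supplied LAN server
One caveat: stub resolvers generally only move on to the second entry when the first fails to answer, not when it returns a clean "no such name", so test that the fallback actually kicks in for LAN domains.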
If macOS does not use them, maybe this workaround gives you a hint, even if it's very limited:
Add a global nameserver (like Google's 8.8.8.8) to your DNS server's list of forwarders.
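In BIND that's a forwarders block in named.conf (a sketch of just the relevant options):
// forward anything this server isn't authoritative for
options {
    forwarders { 8.8.8.8; };
};
Your *.localhost zones still answer locally; everything else gets forwarded to the global server.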
I'm working on an application that connects to URLs, and it responds differently depending on whether or not an address resolves in DNS. I need to find a way to simulate DNS Hijacking so that I can test that my application handles it correctly.
Anybody know a way to do that?
Set up a DNS server on a second PC and use it as your resolver. Then you can shut it down for a while, or modify its answers, to exercise your handling behavior. If you don't have a second machine, you can also set it up in a virtual machine.
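dnsmasq makes such a throwaway test server easy to stand up; a sketch of an /etc/dnsmasq.conf that "hijacks" a whole domain (the domain and address are placeholders):
no-resolv                        # never forward queries upstream
address=/example.com/192.0.2.50  # answer *.example.com with a bogus IP
Point the machine running your application at the dnsmasq box for DNS and it will see the hijacked answers.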
Just add the "hijacked" hosts to your hosts file. On Linux, this is /etc/hosts; on Windows, %SystemRoot%\System32\drivers\etc\hosts.
The entries are in the format ip.addr.ess.here hostname1 hostname2 (there should already be an entry for localhost, so add others to taste).
When you're done, remove (or comment out) the entries from the hosts file again.
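For example (hypothetical targets; 0.0.0.0 and the TEST-NET address simulate different failure modes):
0.0.0.0        api.example.com      # name resolves, but nothing answers there
192.0.2.50     login.example.com    # name resolves to an address you control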
I have a strange problem. I just installed my PHP website on shared hosting, and all services were working fine. But after configuring my app, I can only visit my website once; further attempts give:
"The server is taking too long to respond."
From another IP I can access it, but again only once. It seems every IP address gets blocked after the first visit (even FTP and the other services go down; no access at all from that IP). Can anyone help me explore this problem? I don't think it's my app's fault, since the app works fine on my local PC.
Thanks.
First thing to try would be a traceroute to determine where your traffic is being blocked.
In a Windows command prompt:
tracert www.yoursharedhostingserver.com
At the moment, trying to access this address gives this:
Fatal error: Class 'mainController' not found in /home/myicms/public_html/core/application/crApplication.class.php on line 181
I have tried it multiple times and it didn't block me, so it might be that you have already solved this problem.
As far as I know, the behavior you describe could only be explained by a badly configured "intelligent" firewall, perhaps misconfigured by your host.
If you visit a site on a certain host and suddenly cannot access an FTP server on the same host, then it's either a (really bad) firewall or a (very mean) site that explicitly adds a firewall rule to ignore your address.
Some things you might look into:
It might be something with identd, too. What services have you configured on your host? Was it by any chance some kind of server control panel (which might have the ability to control a firewall)?
Is the block permanent, does it lift after 24 hours, or only after the server is rebooted? Does restarting certain services lift it?
Did you install any software that "protects your server from port scanning"? It might be a bit too aggressive.
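If you (or the host's support staff) have shell access, you can check for such a rule directly; a sketch assuming a Linux host with iptables (the IP is a placeholder for the blocked address):
iptables -L INPUT -n -v | grep 198.51.100.7    # look for DROP/REJECT rules matching the IP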
I wish you good luck in finding the source of this problem!
Chances are that if you can access it once, it's actually working. The problem is more likely in the PHP code than in the server.