I'm working on an application that connects to URLs, and it responds differently depending on whether or not an address resolves in DNS. I need to find a way to simulate DNS Hijacking so that I can test that my application handles it correctly.
Anybody know a way to do that?
Set up a DNS server on a second PC and use it as your application's configured DNS server. Then you can shut it down for a while, or modify its answers, to test your handling behavior. If you don't have a second machine, you can also set it up in a virtual machine.
Just add the "hijacked" hosts to your hosts file. On Linux, this should be /etc/hosts; on Windows, %SystemRoot%\System32\drivers\etc\hosts.
The entries are in the format ip.addr.ess.here hostname1 hostname2 (there should already be entries for localhost, so add others to match your taste).
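For example, to simulate a hijacked or non-resolving name, you can point real-looking hostnames at addresses you control; a minimal sketch, where the hostnames and addresses are placeholders for whatever your application actually connects to:

    127.0.0.1      localhost
    192.0.2.10     www.example.com    # redirect the name to a test server you control
    0.0.0.0        api.example.com    # the name resolves, but to an unusable address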
When you're done, remove (or comment out) the entries from the hosts file again.
I have set up a server on which I intend to host a couple of applications, but something weird is happening. I bought a couple of domains and mapped them all to the same IP address, but now when I try to SSH to that server, only one of the domains goes through; the rest don't.
Can someone please explain why this is happening and what I might have done wrong?
Am I correct that you are attempting to connect to the different domains via different saved configurations in your SSH terminal app? If so, check that you have associated each of those configurations with the appropriate private key. And of course check any other settings you may have needed in those configurations.
For example, make sure that if the host name for the working one is <username>@<domainOne>, the others are not simply <domainTwo.com>. (Errors like this can be hidden in some SSH terminals if the domain is very long.)
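If you use OpenSSH from the command line rather than a GUI client, the equivalent check is your ~/.ssh/config; a minimal sketch, with the host names, user and key path made up purely for illustration:

    Host app-one
        HostName domain-one.example.com
        User deploy
        IdentityFile ~/.ssh/id_ed25519

    Host app-two
        HostName domain-two.example.com
        User deploy
        IdentityFile ~/.ssh/id_ed25519

Since all the domains point at the same IP address, a configuration that fails while another succeeds almost always differs in one of these fields (or in the DNS record for that domain).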
If I use foo.my-company.com at work, everything works well, but if we fix a bug remotely, the web server responds with a Forbidden error.
I heard we can use dev1-foo.my-company.com at a remote location, and it is the same site, but when I access it, all the AJAX requests still go to foo.my-company.com, and they don't work because that is still forbidden. It seems the hosts file can be used to overcome this, but how specifically?
All a hosts file can do is associate a hostname (like dev1-foo.my-company.com) with an IP address (like 10.1.1.5).
This can be enormously useful if:
Your DNS doesn't have an entry for the host you need (e.g. "dev1")
... or ..
You want to override DNS (substitute your own "dev1", e.g. for testing)
This is all TCP/IP - it has nothing directly to do with higher-level protocols like HTTP or AJAX.
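So, assuming dev1-foo.my-company.com resolves to a server you can reach remotely, one way to apply this is to map foo.my-company.com to that same address in the hosts file on the machine you work from remotely, so the AJAX requests also land on the reachable server (10.1.1.5 is just the example address from above; substitute the real one):

    10.1.1.5    foo.my-company.com

Remember to remove the entry again afterwards, or the override will quietly stick around.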
Hope that helps.
I've got XAMPP set up on my laptop (OS X 10.6) for dev, and I wanted to use VirtualDocumentRoot so that *.localhost would automatically map to the corresponding folder under my sites directory. I've got this all set up fine, and it works great, but when I got to work today, I found an issue with the way our LAN handles DNS.
Long story short, instead of checking the LAN DNS server for local domains, it goes out to the root servers. Is there a way to get BIND to check the DHCP-supplied DNS server for addresses it's not responsible for? Or, alternatively, is there a way to get my OS to use the DHCP-supplied DNS server first and then fall back to the local one, with minimal performance hit?
Thanks!
I'm using Arch Linux, but since Mac OS X is based on a *nix system, maybe these ideas will help you:
Take a look at the file /etc/resolv.conf. In my setup this file is automatically generated by NetworkManager.
This document describes ways to update /etc/resolv.conf when dhcpcd, NetworkManager or dhclient is used: https://wiki.archlinux.org/index.php/Dnsmasq#DHCP_Setup
That way you just prepend the local DNS server to the DHCP-supplied ones (or the static ones, if you're switching to a static configuration). Make sure you remove all forwarders from your DNS server.
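On a system that actually uses /etc/resolv.conf, the resulting order would look roughly like this (the LAN address is a placeholder for whatever DHCP hands out):

    nameserver 127.0.0.1      # your local BIND serving *.localhost
    nameserver 192.168.1.1    # DHCP-supplied LAN DNS, used as fallback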
If macOS does not use them, maybe this workaround will give you a hint, even though it's very limited:
Add a global name server (like Google's 8.8.8.8) to your DNS server's list of forwarders.
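Assuming the local DNS server is BIND, that means a forwarders entry in named.conf, roughly:

    options {
        forwarders { 8.8.8.8; };
        forward first;    // try the forwarder before falling back to normal recursion
    };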
Almost every AJAX call I make results in an expensive DNS lookup. Are there headers I can set that will prevent the browser from making DNS lookups? Or perhaps some server-side settings?
How do you know that this is causing a performance problem? Did you use Wireshark to verify? I very much doubt that DNS lookups are to blame.
Using the IP address directly should prevent the DNS lookup :)
You could add the hostname/IP mapping in question to your hosts file (on the computer where the browser is running).
The exact location depends on the operating system. On Windows this is %windir%\System32\drivers\etc\hosts; on Unix-like systems it should be /etc/hosts.
As far as I know, no DNS lookup will happen for entries in the hosts file (at least not on Windows).
One of my sites, mediadeals.co.uk, is showing a blank page.
So I went back to my developer. He asked me to add this to my hosts file,
in windows\system32\drivers\etc\hosts:
74.86.205.232 mediadeals.co.uk
After doing this the site started working. What does this mean?
That's crazy. All he did was make it work on YOUR machine. The hosts file simply maps names to IP addresses; it's like a local DNS. For the outside world to see this, the DNS servers that are authoritative for mediadeals.co.uk need to have an A record pointing to 74.86.205.232.
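You can check whether that record exists yet from any machine with dig (or nslookup on Windows):

    dig +short mediadeals.co.uk A
    # no output means there is no public A record yet;
    # once it exists (and has propagated) you should see 74.86.205.232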
How long ago did you register that domain name? Don't forget that DNS entries may take a while to propagate across the web, sometimes 24+ hours.
And btw, that "fix" will ONLY work on your machine. It maps the friendly URL to an IP address for you, not for the world.
The reason it's not working is that there is no DNS record for it.
The hosts file is just letting you resolve the name locally, as a stand-in for DNS.
All you need is to get the site hosted somewhere and a DNS entry set up.
If you like the site and he is willing to host it for $150, then go for it; but depending on your contract, if hosting should have been covered in the initial budget, then you should question this.