How can I use dig to initiate DNS cache poisoning so that all queries sent to the server will return my IP address instead? I tried researching this, but I have yet to find the ideal answer.
With a DNS cache poisoning attack, an attacker can make the DNS server return wrong results. Typically, this is done by requesting a domain under the attacker's control. While it is possible to poison the entries for multiple victim domains, the attack is usually performed against one or a couple of victim domains. It is not possible to poison all domains with this attack.
The DNS server forwards this request to the nameserver of the attacker, and may initiate multiple requests for various subdomains. There are two typical attack variants:
1) The response of the attacker's nameserver includes additional records for the victim's domain.
2) The attacker tries to find out the UDP port assignment scheme in order to guess what a valid reply would look like. The attacker then requests the victim's domain and immediately sends a fake response, hoping it arrives before the legitimate reply from the victim's nameserver.
dig is a client-side tool you can use to initiate the request. In order to perform a DNS poisoning attack, you need to have a nameserver at your domain under your control (or use a suitably-configured one set up by somebody else).
For variant 1, the request must be for the attacker's domain name and go to the target DNS server.
For variant 2, after an initial request to determine the server's UDP port assignment scheme, you can use dig to send a request for the victim domain.
In any case, after the attack you can use dig to confirm whether it was successful: simply request the victim domain and record type you poisoned and check whether the returned value is the fake one or the real one.
To find out more about how to run dig, refer to its manpage.
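As a sketch of that verification step (the resolver address 192.0.2.53 and the name victim.example are placeholders, not values from the question):

    # Ask the targeted resolver directly for the record in question.
    dig @192.0.2.53 victim.example A +noall +answer

    # Compare against an independent resolver to see the legitimate value.
    dig @1.1.1.1 victim.example A +noall +answer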
I am looking for a specific kind of proxy that is meant to operate in a rendezvous mode, such that two nodes can make an outgoing connection to the same proxy, send a routing token, and have their packets relayed to each other from that point.
Proxy servers like HAProxy would be perfect but AFAIK they do not offer something like that: the goal of the proxy in this case is to make another outgoing connection and route the packets to that location. In this case, I want two nodes to connect to the proxy, and have their packets relayed between them through the proxy, after sending a routing token that can be used to associate the two nodes.
I could write my own server to perform such type of relaying, but I am wondering if something already exists to do something like this. I am looking for such a solution as a fallback for cases where NAT traversal protocols like ICE/STUN/TURN are not feasible due to a highly restricted network environment that does not allow UDP traffic. The base protocol for the proxy could be TCP, HTTP or WebSocket, which would be easier to allow in a firewall with a simple rule.
Any ideas or recommendations?
I believe SOCKSv5 has everything you are asking for.
two nodes can make an outgoing connection to the same proxy, send a routing token, and have their packets relayed to each other from that point.
The routing token in this case would be the endpoint address and/or the user credentials. I would first look at the super simple implementation built into the 'ssh' utility; this guide goes over how to get everything set up. If you need something more granular, then look into Dante.
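As a minimal sketch of the ssh-based approach (the host name proxyhost.example, the account, and port 1080 are placeholders):

    # Start a SOCKS5 proxy on local port 1080, tunnelled through proxyhost.
    ssh -D 1080 -N user@proxyhost.example

    # Any SOCKS-capable client can then relay its traffic through it, e.g.:
    curl --socks5-hostname 127.0.0.1:1080 http://example.com/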
The only tricky part is when you try to use the user credential option with SOCKSv5, as it is not as well supported in browsers, but it is possible with add-ons.
I am writing a small HTTP server on Windows. Access to the server is secured with the usual HTTP auth mechanisms (I use the Windows HTTP API). But I want to have no auth for localhost, i.e. local users should be able to access the server without a password.
The question is: is that safe? More precisely, is it safe to trust the remote address of a TCP connection without further auth?
Assume for a moment that an adversary (Charly) is trying to send a single malicious HTTP GET to my server. Furthermore, assume that all Windows/router firewalls ingress checks for localhost addresses let source addresses of 127.0.0.1 and [::1] pass.
So the remote address could be spoofed, but for a TCP connection we need a full three-way handshake. Thus, a SYN-ACK is sent by Windows upon reception of the SYN. This SYN-ACK goes nowhere, but Charly might just send an ACK shortly afterwards. This ACK would be accepted if the ack'ed SEQ of the SYN-ACK was correct. Afterwards, Charly can send the malicious payload since he knows the correct TCP SEQ and ACK numbers.
So all security hinges on the unpredictability of Windows' outgoing TCP initial sequence number (ISN). I'm not sure how secure that is, i.e. how hard it is to predict the next session's ISN.
Any insight is appreciated.
In the scenario you are describing, an attacker wouldn't get any packets back from your web server. If you can use something like digest auth (where the server first sends the client a short random nonce string, and the client then uses that nonce to create an authentication hash), you'd be fine.
If installing a firewall on the system is an option, you could use a simple rule like "don't accept packets with source IP 127.0.0.1 from any interface other than loopback".
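The question is about Windows, but as an illustration of that rule, on Linux the same idea could be expressed with iptables (a sketch only; on Windows you would need an equivalent Windows Firewall rule):

    # Drop packets claiming a loopback source address that arrive on any
    # interface other than the loopback interface itself.
    iptables -A INPUT ! -i lo -s 127.0.0.0/8 -j DROP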
We often find columns like Address and Port in web browser proxy settings. I know that when we use a proxy to visit a page, the web browser requests the web page from the proxy server, but what I want to know is how the whole mechanism works. I have observed that many ISPs allow access only to a single IP (of their website) after we have exhausted our free data usage. But when we enter the site we want to browse in the proxy URL and then type in the allowed IP, the site gets loaded. How does this work?
In general, your browser simply connects to the proxy address & port instead of whatever IP address the DNS name resolved to. It then makes the web request as per normal.
The web proxy reads the headers, uses the "Host" header of HTTP/1.1 to determine where the request is supposed to go, and then makes that request itself relaying all remaining data in both directions.
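You can watch this happen with a tool like curl (the proxy address proxy.example.net:3128 is a placeholder, not a real proxy):

    # Send the request via the proxy; -v shows curl connecting to the proxy
    # and issuing the request with the full URL and a Host: header.
    curl -v -x http://proxy.example.net:3128 http://example.com/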
Proxies will typically also do caching so if another person requests the same page from that proxy, it can just return the previous result. (This is simplified -- caching is a complex topic.)
Since the proxy is in complete control of the connection, it can choose to route the request elsewhere, scrape request and reply data, inject other things (like ads), or block you altogether. Use SSL to protect against this.
Some web proxies are "transparent". They reside on a gateway through which all IP traffic must pass and use the machine's networking stack to redirect outgoing connections to port 80 to a local port instead. It then behaves the same as though a proxy was defined in the browser.
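As a sketch of how that redirection is commonly done on a Linux gateway (the interface name eth0 and the proxy port 3128 are assumptions):

    # Redirect outgoing port-80 traffic coming in from the LAN interface
    # to a local proxy listening on port 3128.
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128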
Other proxies, like SOCKS, have a dedicated protocol that allows non-HTTP requests to be made as well.
There are two types of HTTP proxies: reverse proxies and forward proxies.
The web browser uses a forward proxy: it sends all HTTP traffic through the proxy, and the proxy takes this traffic out to the internet. Every HTTP request that leaves your computer is sent to the proxy before going to the target site.
The ISP's blocking does not work when you use a proxy because every packet that leaves your machine is addressed to the proxy and not to the target site. The proxy could be getting its internet access through another ISP that has no blocks whatsoever.
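As an illustration of "everything goes to the proxy first", many command-line clients honour the usual proxy environment variables (proxy.example.net:3128 is again a placeholder):

    # Tools such as curl and wget pick these up automatically.
    export http_proxy=http://proxy.example.net:3128
    export https_proxy=http://proxy.example.net:3128
    curl http://example.com/   # now sent to the proxy, not directly to example.com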
I am writing a little app similar to Omegle. I have an HTTP server written in Java and a client which is an HTML document. The main way of communication is by HTTP requests (long polling).
I've implemented some sort of security by using the HTTPS protocol, and I have a securityid for every client that connects to the server. When a client connects, the server gives it a securityid, which the client must always send back whenever it makes a request.
I am afraid of a man-in-the-middle attack here. Do you have any suggestions for how I could protect the app from such an attack?
Note that this app is built for theoretical purposes; it won't ever be used in practice, so your solutions don't have to be particularly practical.
HTTPS does not only do encryption, but also authentication of the server. When a client connects, the server shows it has a valid and trusted certificate for its domain. This certificate cannot simply be spoofed or replayed by a man-in-the-middle.
Simply enabling HTTPS is not good enough because the web brings too many complications.
For one thing, make sure you set the secure flag on the cookies, or else they can be stolen.
It's also a good idea to ensure users only access the site by typing https://<yourdomain> in the address bar; this is the only way to ensure an HTTPS session is made with a valid certificate. When you type https://<yourdomain>, the browser will refuse to let you on the site unless the server provides a valid certificate for <yourdomain>.
If you just type <yourdomain> without https:// in front, the browser won't care what happens. This has two implications I can think of off the top of my head:
The attacker redirects to some Unicode domain with a similar name (i.e. it looks the same but is a different binary string and thus a different domain) and then provides a valid certificate for that domain (since he owns it); the user probably wouldn't notice this.
The attacker could emulate the server, but without HTTPS: he makes his own secure connection to the real server and becomes a cleartext proxy between you and the server. He can now capture all your traffic and do anything he wants, because he owns your session.
When browsing through the internet for the last few years, I'm seeing more and more pages getting rid of the 'www' subdomain.
Are there any good reasons to use or not to use the 'www' subdomain?
There are a ton of good reasons to include it, the best of which is here:
Yahoo Performance Best Practices
Due to the dot rule with cookies, if you don't have the 'www.' then you can't set two-dot cookies or cross-subdomain cookies a la *.example.com. There are two pertinent impacts.
First it means that any user you're giving cookies to will send those cookies back with requests that match the domain. So even if you have a subdomain, images.example.com, the example.com cookie will always be sent with requests to that domain. This creates overhead that wouldn't exist if you had made www.example.com the authoritative name. Of course you can use a CDN, but that depends on your resources.
Also, you then don't have the ability to set a cross-subdomain cookie. This seems evident, but this means allowing authenticated users to move between your subdomains is more of a technical challenge.
So ask yourself some questions. Do I set cookies? Do I care about potentially needless bandwidth expenditure? Will authenticated users be crossing subdomains? If you're really concerned with inconveniencing the user, you can always configure your server to take care of the www/no www thing automatically.
See dropwww and yes-www (saved).
Just after asking this question I came over the no-www page which says:
...Succinctly, use of the www subdomain is redundant and time consuming to communicate. The internet, media, and society are all better off without it.
Take it from a domainer: use both www.domainname.com and the plain domainname.com, otherwise you are just throwing your traffic away to the browser's search engine (DNS error).
Actually, it is amazing how many domains out there, especially amongst the top 100, correctly resolve for www.domainname.com but not for domainname.com.
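You can check how a given domain behaves with dig (example.com is used here purely as a placeholder):

    # Does the bare domain resolve?
    dig +short example.com A
    # Does the www name resolve?
    dig +short www.example.com A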
There are MANY reasons to use the www sub-domain!
When writing a URL, it's easier to handwrite and type "www.stackoverflow.com" than "http://stackoverflow.com". Most text editors, email clients, word processors and WYSIWYG controls will automatically recognise both of the above and create hyperlinks. Typing just "stackoverflow.com" will not result in a hyperlink; after all, it's just a domain name. Who says there's a web service there? Who says the reference to that domain is a reference to its web service?
What would you rather write/type/say: "www." (4 chars) or "http://" (7 chars)?
"www." is an established shorthand way of unambiguously communicating the fact that the subject is a web address, not a URL for another network service.
When verbally communicating a web address, it should be clear from the context that it's a web address, so saying "www" is redundant. Servers should be configured to return HTTP 301 (Moved Permanently) responses forwarding all requests for #.stackoverflow.com (the root of the domain) to the www subdomain.
In my experience, people who think WWW should be omitted tend to be people who don't understand the difference between the web and the internet and use the terms interchangeably, like they're synonymous. The web is just one of many network services.
If you want to get rid of www, why not change your HTTP server to use a different port as well? TCP port 80 is sooo yesterday... Let's change it to port 1234. YAY, now people have to say and type "http://stackoverflow.com:1234" (aitch tee tee pee colon slash slash stack overflow dot com colon one two three four), but at least we don't have to say "www", eh?
There are several reasons, here are some:
1) The person wanted it this way on purpose
People use DNS for many things, not only the web. They may need the main DNS name for some other service that is more important to them.
2) Misconfigured dns servers
If someone does a lookup of www against your DNS server, your DNS server needs to be able to resolve it.
3) Misconfigured web servers
A web server can host many different web sites. It distinguishes which site you want via the Host header. You need to specify which host names you want to be used for your website.
4) Website optimization
It is better not to handle both, but to forward one to the other with a moved-permanently HTTP status code. That way the two addresses won't compete for inbound link ranks. (A quick way to check this is sketched after this list.)
5) Cookies
To avoid problems with cookies not being sent back by the browser. This can also be solved with the moved-permanently HTTP status code.
6) Client side browser caching
Web browsers may not reuse a cached image if one request goes to www and another to the bare domain. This can also be solved with the moved-permanently HTTP status code.
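As mentioned in points 4 to 6 above, the usual fix is a permanent redirect from one form to the other. A minimal way to check which form a site treats as canonical, with example.com standing in as a placeholder domain, is:

    # Look at the status line and Location header for both forms.
    curl -sI http://example.com/ | head -n 5
    curl -sI http://www.example.com/ | head -n 5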
There is no huge advantage to including it or not including it, and no single objectively best strategy. “no-www.org” is a silly load of old dogma trying to present itself as definitive fact.
If the “big organisation that has many different services and doesn't want to have to dedicate the bare domain name to being a web server” scenario doesn't apply to you (and in reality it rarely does), which address you choose is a largely cultural matter. Are people where you are used to seeing a bare “example.org” domain written on advertising materials, would they immediately recognise it as a web address without the extra ‘www’ or ‘http://’? In Japan, for example, you would get funny looks for choosing the non-www version.
Whichever you choose, though, be consistent. Make both www and non-www versions accessible, but make one of them definitive, always link to that version, and make the other redirect to it (permanently, status code 301). Having both hostnames respond directly is bad for SEO, and serving any old hostname that resolves to your server leaves you open to DNS rebinding attacks.
Apart from the load optimization regarding cookies, there is also a DNS-related reason for using the www subdomain: you can't use a CNAME on the naked domain. On yes-www.org (saved) it says:
When using a provider such as Heroku or Akamai to host your web site, the provider wants to be able to update DNS records in case it needs to redirect traffic from a failing server to a healthy server. This is set up using DNS CNAME records, and the naked domain cannot have a CNAME record. This is only an issue if your site gets large enough to require highly redundant hosting with such a service.
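You can see this distinction for a given site with dig (example.com is a placeholder; in this kind of setup the www host is often a CNAME to the hosting provider, while the apex has to carry address records):

    # The www host may be a CNAME...
    dig +short www.example.com CNAME
    # ...but the apex can only carry A/AAAA records.
    dig +short example.com A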
As jdangel points out, the www is good practice in some cookie situations, but I believe there is another reason to use www.
Isn't it our responsibility to care for and protect our users? As most people expect www, you will give them a less-than-perfect experience by not supporting it.
To me it seems a little arrogant not to set up a DNS entry just because in theory it's not required. There is no overhead in carrying the DNS entry, and through redirects etc. visitors can still be sent to the non-www address.
Seriously, don't lose valuable traffic by leaving your potential visitor with an unnecessary "site not found" error.
Additionally, in a Windows-only network you might be able to set up a Windows DNS server to avoid the following problem, but I don't think you can in a mixed environment of Mac and Windows. If a Mac does a DNS query against a Windows DNS server, mydomain.com will return all the available name servers, not the web server. So if you type mydomain.com in your browser, it will query a name server, not a web server; in that case you need a subdomain (e.g. www.mydomain.com) pointing to the specific web server.
Some sites require it because the service is configured on that particular setup to deliver web content via the www sub-domain only.
This is correct as www is the conventional sub-domain for "World Wide Web" traffic.
Just as port 80 is the standard port. Obviously there are other standard services and ports as well (HTTP over TCP/IP on port 80 is nothing special!).
Imagine mycompany...
mx1.mycompany.com 25 smtp, etc
ftp.mycompany.com 21 ftp
www.mycompany.com 80 http
Sites that don't require it basically have forwarding in DNS or redirection of some kind.
e.g.
*.mycompany.com 80 http
The only reason to do it, as far as I can see, is if you prefer it and you want to.