I haven't found this anywhere in Heroku's documentation or on Google. Typically this is done in the hosts file. Does anyone have any idea how to block an IP on Heroku?
Heroku doesn't have a firewall that you can use to block IPs, so you would have to either block the IP at the application level, or put a proxy of some sort in front of your application that can block it for you. A common choice is Cloudflare, which supports blocking individual IPs, rate limiting IPs, etc.
If you want to stick with blocking at the application level and are using Ruby on Rails, this question and answer might give you what you're looking for: How can you block or filter IP addresses on Heroku?
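If you're not on Rails, the same idea ports to any stack: reject the request in a small piece of middleware before it reaches your application. Here is a minimal sketch in Python as a WSGI middleware (the blocklist and wiring are illustrative, not a definitive implementation):

    BLOCKED_IPS = {"203.0.113.7"}  # illustrative blocklist

    class IPBlockMiddleware:
        """Rejects requests from blocked IPs before they reach the app."""
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            # On Heroku the original client address arrives via X-Forwarded-For.
            ip = environ.get("HTTP_X_FORWARDED_FOR", environ.get("REMOTE_ADDR", ""))
            ip = ip.split(",")[0].strip()
            if ip in BLOCKED_IPS:
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Forbidden"]
            return self.app(environ, start_response)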
Since this was first answered, Heroku has added a Web Application Firewall to its add-on marketplace: https://elements.heroku.com/addons/expeditedwaf
It can block inbound requests by IP, User-Agent, Referer, country, etc.
I have created a listener on the LB (load balancer) with rules so that requests to different subdomains are routed appropriately. I have set a CNAME for each subdomain in Cloudflare.
The problem is with the proxy feature: when I turn it off, my page works without a problem, but when I turn it on, requests time out.
Is there a way to use the proxy feature with an LB?
The short answer is yes, this is very possible to do.
The longer answer is that a timeout indicates that some change Cloudflare makes to the request is causing your AWS servers to hang. Just a couple of examples could be:
A firewall rule that blocks (and therefore times out) Cloudflare's IPs
Servers not respecting the X-Forwarded-For header, so that all requests appear to come from a small group of Cloudflare IPs and upset application logic (see the sketch below)
It would help to know whether the requests are reaching your load balancer, and furthermore whether the servers behind the LB are receiving them.
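If the X-Forwarded-For case is the culprit, the fix is to derive the client address from the proxy headers rather than from the socket peer. A minimal sketch in Python (the header names are real; the helper itself is illustrative):

    def client_ip(environ):
        """Best-effort client IP for a WSGI app behind Cloudflare and/or an ELB."""
        # Cloudflare sends the original client address in CF-Connecting-IP.
        cf_ip = environ.get("HTTP_CF_CONNECTING_IP")
        if cf_ip:
            return cf_ip
        # Otherwise use the first (left-most) X-Forwarded-For entry.
        xff = environ.get("HTTP_X_FORWARDED_FOR", "")
        if xff:
            return xff.split(",")[0].strip()
        # No proxy headers at all: fall back to the socket peer address.
        return environ.get("REMOTE_ADDR", "")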
Can I count on the external IP address being stable for any given Dyno instance?
That is, my Dyno boots up and makes a request to some external service, and that service notes the incoming IP address. Could that service assume that any subsequent traffic from that same Dyno instance will come from the same IP address? Will the same apply if the same Dyno makes a request to a very different endpoint?
I understand that Heroku makes no guarantees about Dyno addressing unless you upgrade to the Private level offering (or otherwise spend more on addons or Enterprise features). I'm not looking to know in advance which IPs to expect, just whether it's stable.
I assume the architecture is fairly obvious: containers running on VMs that have outbound network access through the VM's interface, so external IPs for outbound connections are going to be the VM's IP address. However, Heroku emphasizes its routing layer and makes it sound complicated, so you never know whether they have some kind of outbound routing complexity as well, which is what I'm worried about.
On a broad level, you should never expect IP addresses to be stable on Heroku by default. This applies to DNS targets, hence the requirements for CNAMEs everywhere, and outbound IPs.
Regarding the specific question: yes, a single specific Dyno instance will have the same outbound IP address, but that means it will only be stable for ~24 hours at most (possibly plus another 3.5 hours; see /Dynos#restarting). After web.1's daily cycle, the newly launched web.1 will have a new public IP address. web.1, web.2, web.3, web.#…, along with any/all other process groups' Dynos, will likely never share a public IP address at the same time.
There are ways to stabilize outbound IPs longer term, as accomplished by the various proxy partner add-ons, or any other proxy service you choose to use.
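If you want to verify this empirically, have each dyno report its own outbound address at boot and watch it change across daily restarts. A small sketch in Python (api.ipify.org is a real echo service; any equivalent works):

    import urllib.request

    # Ask an external echo service which address our outbound traffic comes from.
    # Log this at dyno boot; after a restart you should see a different address.
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        print("outbound IP:", resp.read().decode())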
I am working on a small project to help me understand websockets better. I am making a simple browser game that connects to an IP via a websocket. There will be 3 IP addresses; however, I want to assign the user an IP and not let them modify it, so they are unable to get on the same server as friends.
I will assign the IP based on how full the games are, etc., and this will be done via PHP. Currently, although it connects to this IP, the user is able to use the browser console to change the IP to one of the other ones.
I was thinking of sending a check number: the web server sends this to the user along with the IP, and also sends it to the websocket server. Then, when a user connects, if the check number doesn't match, the connection is rejected.
I'm new to websockets, so I'm not sure whether this would be easy to implement. Are there any easy solutions to this?
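For what it's worth, the check-number idea is essentially a signed, short-lived ticket, and it is straightforward to implement. A minimal sketch in Python (the question's web tier is PHP; this just illustrates the scheme, and the secret and field layout are illustrative):

    import hashlib
    import hmac
    import time

    SECRET = b"shared-between-web-and-websocket-server"  # illustrative

    def make_ticket(user_id: str, server_ip: str) -> str:
        """Issued by the web tier alongside the assigned IP."""
        payload = f"{user_id}|{server_ip}|{int(time.time())}"
        sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}|{sig}"

    def check_ticket(ticket: str, my_ip: str, max_age: int = 60) -> bool:
        """Run by the websocket server on connect; reject the socket if False."""
        payload, _, sig = ticket.rpartition("|")
        expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False
        _, server_ip, issued = payload.split("|")
        # The ticket must name *this* server and still be fresh.
        return server_ip == my_ip and time.time() - int(issued) <= max_age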
That seems to be the duty of another element, in particular the load balancer. How are you balancing the requests across those 3 servers? Does your load balancer support sticky sessions?
If not, you can probably record which IP address the user connected to first, and then, if they connect to one of the other two later, return an HTTP 302 (Redirect) pointing at the server you want.
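As a sketch of that redirect idea (Flask assumed; the assignment store is illustrative):

    from flask import Flask, redirect, request

    app = Flask(__name__)
    ASSIGNED = {}  # illustrative: user id -> the game-server host they were given

    @app.route("/play")
    def play():
        user = request.args.get("user", "")
        home = ASSIGNED.setdefault(user, request.host)
        if request.host != home:
            # The user landed on the wrong game server: bounce them back.
            return redirect(f"http://{home}/play?user={user}", code=302)
        return "welcome"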
Cheers.
How do NetNanny or K9 Web Protection set up a web proxy without configuring the browsers?
How can it be done?
By using WinSock directly, or hooking in at the NDIS or hardware driver level, and then filtering at that level, just like any firewall software does. NDIS is the easy way.
Download this ISO image: http://www.microsoft.com/downloads/en/confirmation.aspx?displaylang=en&FamilyID=36a2630f-5d56-43b5-b996-7633f2ec14ff
It has a bunch of samples and tools to help you build what you want.
After you mount it or burn it to CD and install it, go to this folder:
c:\WinDDK\7600.16385.1\src\network\ndis\
I think what you need is a transparent proxy that supports WCCP.
Take a look at squid-cache FAQ page
And the Wikipedia entry for WCCP
With that setup you just need to do some firewall configuration, and all your web traffic will be handled by the transparent proxy. No setup will be needed in your browser.
NetNanny is not a proxy. It is tied to the host machine and browser (and possibly other applications as well). It then filters all incoming and outgoing "content" from the machine/application.
Essentially, NetNanny is a content-control system, as opposed to a destination-control system (a proxy).
The easiest way to divert all traffic for a certain site to some other address is by changing the hosts file on the local host.
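For example, a hosts-file line like this (the hostname is illustrative) sends all traffic for that site to the loopback address instead:

    127.0.0.1   www.example.com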
You might want to have a look at the explanation here: http://www.fiddlertool.com/fiddler/help/hookup.asp
This is how Fiddler2 inserts a proxy between most apps and the internet without modifying the apps (the page also explains what to do when the default setup fails). It does not answer how NetNanny/K9 etc. work, though; as noted above, they do a little more and may be a little more intrusive.
I believe you're looking for Browser Helper Objects. These little gizmos capture ALL browser communication, and as such can either remove ads from the HTML (good gizmo), redirect every second click to a spam site (bad gizmo), or just capture every URL you type and send it home, like all the web toolbars do.
What you want to do is route all outgoing HTTP(S) requests from your LAN through a caching proxy (like Squid). This is the setup for a transparent web proxy.
There are different ways to do this, although I've only ever set it up on OpenBSD and Linux, using Squid as the proxy.
At a high level, you have a firewall with rules to send all externally bound HTTP traffic to a local Squid server. The Squid server is configured to:
accept all http requests
forward the requests on to the real external hosts
cache the reply
forward the reply back to the requestor on the local LAN
You can then add more granular rules in Squid to control access to websites, filter content, etc.
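As a rough sketch of the Linux variant (modern Squid calls this "intercept" mode; the addresses and interface here are illustrative, and HTTPS interception takes considerably more work than this):

    # squid.conf (sketch)
    http_port 3129 intercept          # accept transparently redirected traffic
    acl localnet src 192.168.0.0/16   # illustrative LAN range
    http_access allow localnet
    http_access deny all

    # On the firewall, redirect outbound port-80 traffic to Squid:
    #   iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3129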
I'm pretty sure you can also get this functionality in various networking gear. I bet F5 has products that do some or all of what I described, and probably Cisco as well. There are probably other proxies out there besides Squid that you can use, too.
PS. I have no idea if this is how K9 Web Protection or NetNanny works.
Squid can provide an intercept proxy for the HTTP and HTTPS ports without configuring the browsers, and it also supports WCCP.
Browsing the internet over the last few years, I've been seeing more and more sites getting rid of the 'www' subdomain.
Are there any good reasons to use or not to use the 'www' subdomain?
There are a ton of good reasons to include it, the best of which is here:
Yahoo Performance Best Practices
Due to the dot rule with cookies, if you don't have the 'www.' then you can't set two-dot cookies or cross-subdomain cookies a la *.example.com. There are two pertinent impacts.
First it means that any user you're giving cookies to will send those cookies back with requests that match the domain. So even if you have a subdomain, images.example.com, the example.com cookie will always be sent with requests to that domain. This creates overhead that wouldn't exist if you had made www.example.com the authoritative name. Of course you can use a CDN, but that depends on your resources.
Also, you then don't have the ability to set a cross-subdomain cookie. This seems evident, but it means that allowing authenticated users to move between your subdomains is more of a technical challenge.
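To make the distinction concrete, here is a minimal sketch (Flask assumed; the cookie names and domain are illustrative) of a host-only cookie versus a cross-subdomain cookie:

    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/login")
    def login():
        resp = make_response("ok")
        # Host-only cookie: sent back only to the exact host that set it.
        resp.set_cookie("host_session", "abc123")
        # Domain cookie: sent to example.com AND every subdomain of it,
        # including images.example.com -- the overhead described above.
        resp.set_cookie("site_session", "abc123", domain=".example.com")
        return resp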
So ask yourself some questions. Do I set cookies? Do I care about potentially needless bandwidth expenditure? Will authenticated users be crossing subdomains? If you're worried about inconveniencing the user, you can always configure your server to handle the www/no-www question automatically.
See dropwww and yes-www (saved).
Just after asking this question I came across the no-www page, which says:
…Succinctly, use of the www subdomain is redundant and time-consuming to communicate. The internet, media, and society are all better off without it.
Take it from a domainer: use both www.domainname.com and the plain domainname.com;
otherwise you are just throwing your traffic away to the browser's search engine (DNS error).
Actually, it is amazing how many domains out there, especially amongst the top 100, correctly resolve for www.domainname.com but not domainname.com.
There are MANY reasons to use the www sub-domain!
When writing a URL, it's easier to handwrite and type "www.stackoverflow.com" than "http://stackoverflow.com". Most text editors, email clients, word processors and WYSIWYG controls will automatically recognise both of the above and create hyperlinks. Typing just "stackoverflow.com" will not result in a hyperlink; after all, it's just a domain name. Who says there's a web service there? Who says the reference to that domain is a reference to its web service?
What would you rather write/type/say: "www." (4 chars) or "http://" (7 chars)?
"www." is an established shorthand way of unambiguously communicating the fact that the subject is a web address, not a URL for another network service.
When verbally communicating a web address, it should be clear from the context that it's a web address, so saying "www" is redundant. Servers should be configured to return HTTP 301 (Moved Permanently) responses forwarding all requests for the root of the domain (stackoverflow.com) to the www subdomain.
In my experience, people who think WWW should be omitted tend to be people who don't understand the difference between the web and the internet and use the terms interchangeably, like they're synonymous. The web is just one of many network services.
If you want to get rid of www, why not change your HTTP server to use a different port as well? TCP port 80 is sooo yesterday... let's change it to port 1234. YAY, now people have to say and type "http://stackoverflow.com:1234" (aitch tee tee pee colon slash slash stack overflow dot com colon one two three four), but at least we don't have to say "www", eh?
There are several reasons; here are some:
1) The person wanted it this way on purpose
People use DNS for many things, not only the web. They may need the main DNS name for some other service that is more important to them.
2) Misconfigured DNS servers
If someone does a lookup of www against your DNS server, the server needs a record in order to resolve it.
3) Misconfigured web servers
A web server can host many different web sites, and it distinguishes which site you want via the Host header. You need to specify which host names should be served by your website.
4) Website optimization
It is better not to serve both, but to forward one of them with a 301 Moved Permanently HTTP status code. That way the two addresses won't compete for inbound link rank.
5) Cookies
To avoid problems with cookies not being sent back by the browser. This can also be solved with the 301 redirect.
6) Client-side browser caching
Web browsers may not reuse a cached image if one request goes to www and another goes without it. This can also be solved with the 301 redirect.
There is no huge advantage to including it or not including it, and no one objectively-best strategy. “no-www.org” is a silly load of old dogma trying to present itself as definitive fact.
If the “big organisation that has many different services and doesn't want to have to dedicate the bare domain name to being a web server” scenario doesn't apply to you (and in reality it rarely does), which address you choose is a largely cultural matter. Are people where you are used to seeing a bare “example.org” domain written on advertising materials, would they immediately recognise it as a web address without the extra ‘www’ or ‘http://’? In Japan, for example, you would get funny looks for choosing the non-www version.
Whichever you choose, though, be consistent. Make both www and non-www versions accessible, but make one of them definitive, always link to that version, and make the other redirect to it (permanently, status code 301). Having both hostnames respond directly is bad for SEO, and serving any old hostname that resolves to your server leaves you open to DNS rebinding attacks.
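As a concrete sketch of that canonical redirect (an application-level version in Python with Flask; in practice you would usually do this in the web server or CDN config, and example.com is illustrative):

    from urllib.parse import urlsplit, urlunsplit

    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.before_request
    def canonical_host():
        # Permanently redirect the bare domain to the definitive www host.
        parts = urlsplit(request.url)
        if parts.hostname == "example.com":
            target = urlunsplit(parts._replace(netloc="www.example.com"))
            return redirect(target, code=301)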
Apart from the load optimization regarding cookies, there is also a DNS-related reason for using the www subdomain: you can't point a CNAME at the naked domain. On yes-www.org (saved) it says:
When using a provider such as Heroku or Akamai to host your web site, the provider wants to be able to update DNS records in case it needs to redirect traffic from a failing server to a healthy server. This is set up using DNS CNAME records, and the naked domain cannot have a CNAME record. This is only an issue if your site gets large enough to require highly redundant hosting with such a service.
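In zone-file terms (the hostnames and address are illustrative; the CNAME target stands in for whatever your provider gives you):

    example.com.       A      203.0.113.10             ; the apex cannot be a CNAME
    www.example.com.   CNAME  example.herokudns.com.   ; a subdomain can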
As jdangel points out, the www is good practice in some cookie situations, but I believe there is another reason to use www.
Isn't it our responsibility to care for and protect our users? As most people expect www, you will give them a less-than-perfect experience by not programming for it.
To me it seems a little arrogant not to set up a DNS entry just because in theory it's not required. There is no overhead in carrying the DNS entry, and through redirects etc. visitors can be sent on to the non-www address.
Seriously, don't lose valuable traffic by leaving your potential visitor with an unnecessary "site not found" error.
Additionally, in a Windows-only network you might be able to set up a Windows DNS server to avoid the following problem, but I don't think you can in a mixed environment of Mac and Windows. If a Mac does a DNS query against a Windows DNS server, mydomain.com will return all the available name servers, not the web server. So if you type mydomain.com in your browser, the browser queries a name server, not a web server; in that case you need a subdomain (e.g. www.mydomain.com) pointing to the specific web server.
Some sites require it because the service is configured to deliver web content via the www subdomain only.
This is correct, as www is the conventional subdomain for World Wide Web traffic,
just as port 80 is the standard port. Obviously there are other standard services and ports as well (HTTP on TCP port 80 is nothing special!).
Imagine mycompany:
mx1.mycompany.com   25   SMTP, etc.
ftp.mycompany.com   21   FTP
www.mycompany.com   80   HTTP
Sites that don't require it basically have forwarding in DNS or redirection of some kind, e.g.:
*.mycompany.com   80   HTTP
The only reason to do it, as far as I can see, is if you prefer it and you want to.