I have some WFS and WMS layers published in GeoServer and am trying to access them from my application. I want to ensure GeoServer allows POST requests only and blocks others like GET, PUT, etc. I followed https://docs.geoserver.org/stable/en/user/security/service.html and changed rest.properties to include only the POST method, but GET is still allowed. Am I missing something?
Changing rest.properties only restricts the REST API itself; it has no effect on the WMS and WFS services.
Turning GET access off would prevent the vast majority of WMS clients from accessing your service, as a GET request to the GetMap endpoint is the standard way to fetch a WMS map. WFS clients would be less affected, as their normal mode of operation is POST. PUT is not used by any of the current OGC services, so turning it off will have no effect.
Since (pretty much) the whole point of GeoServer is to allow the open and interoperable exchange of data, there is no way to turn HTTP methods on or off for the OGC services (WMS, WFS, etc.).
If you are trying to implement some sort of security by obscurity, this will probably not work (for long); instead, set up a proper security system on the GetMap or GetFeature operations as you need.
If you really (really) must cripple the service like this, then you can probably do it by using nginx or Apache as a restricted front end and passing only the "right" requests through to GeoServer.
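For example, a minimal nginx front-end sketch; the upstream port and path are assumptions about a default GeoServer install:

```nginx
server {
    listen 80;

    location /geoserver/ {
        # Reject every method except POST (GET, PUT, DELETE, ... get 403)
        limit_except POST {
            deny all;
        }
        proxy_pass http://localhost:8080/geoserver/;
        proxy_set_header Host $host;
    }
}
```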
So I have run into a problem. I am building a JavaScript file which appends an iframe into a client's page (within a div). Let's say this iframe is loaded from http://example.com/iframe. The iframe's backend module (written in Spring, to handle form submission) has some endpoints like http://example.com/url1 and http://example.com/url2.
Now, I want only the iframe to be able to communicate with the backend APIs. Currently, I can hit the iframe's backend APIs from my local machine too.
I have come across the Referer HTTP field and initially planned to set a referer filter on the APIs, but later found out that this can easily be spoofed. Will I get any benefit from setting a CORS header on the APIs with the origin set to http://example.com? Will it work, and if so, is it a safe and dependable solution? Are there any better alternatives?
Our single-page app embeds videos from YouTube for end-users' consumption. Everything works great as long as the user has access to the YouTube domain and to the content of that domain's pages.
However, we frequently run into users whose access to YouTube is blocked by a web filter box on their network, such as https://us.smoothwall.com/web-filtering/ . The challenge is that the filter doesn't actually kill the request; it simply returns another page instead, with an HTTP status of 200. The page usually says something along the lines of "hey, sorry, this content is blocked".
One option is to try to fetch https://www.youtube.com/favicon.ico to prove that the domain is reachable. The issue is that these filters usually involve a custom SSL certificate so they can inspect the HTTP content (see: https://us.smoothwall.com/ssl-filtering-white-paper/), so I can't rely on TLS catching the swapped content via an incorrect certificate; I will instead receive a perfectly valid favicon.ico file, except from a different site. There's also the whole CORS issue of issuing an XHR from our domain against youtube.com's domain, which means that if I want to fetch that favicon.ico I have to do it JSONP-style. And even with a plain old <img> I can't inspect the contents of the image because of CORS (see Get image data in JavaScript?), so I'm stuck with that approach too.
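For reference, the naive probe looks something like this sketch; note it only proves that some image loaded, not that it really came from youtube.com:

```javascript
// Naive reachability probe: resolves true if an image loads from the URL.
// A filtering box that re-signs TLS can substitute its own image, so a
// "true" here is not proof you reached the real youtube.com.
function probeImage(url, timeoutMs) {
  return new Promise(function (resolve) {
    var img = new Image();
    var timer = setTimeout(function () { resolve(false); }, timeoutMs || 5000);
    img.onload = function () { clearTimeout(timer); resolve(true); };
    img.onerror = function () { clearTimeout(timer); resolve(false); };
    img.src = url + '?_=' + Date.now(); // cache-buster
  });
}

probeImage('https://www.youtube.com/favicon.ico').then(function (reachable) {
  console.log('favicon loaded:', reachable);
});
```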
Are there any proven and reliable ways of dealing with this situation and testing browser-level reachability towards a specific domain?
Cheers.
In general, web proxies that want to play nicely typically annotate the HTTP conversation with additional response headers that can be detected.
So one approach to building a man-in-the-middle detector may be to inspect those response headers and compare the results from behind the MITM with those from an unfiltered connection.
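A rough sketch of that idea; the header names are illustrative, not exhaustive, and this only works against endpoints that are same-origin or CORS-enabled, since cross-origin responses hide most of their headers:

```javascript
// Fetch an endpoint and report response headers that well-behaved
// proxies commonly add along the way.
function detectProxyHeaders(url) {
  return fetch(url, { method: 'HEAD' }).then(function (response) {
    var suspects = ['via', 'x-cache', 'x-bluecoat-via', 'proxy-connection'];
    return suspects.filter(function (name) {
      return response.headers.has(name);
    });
  });
}

detectProxyHeaders('/some-known-endpoint').then(function (found) {
  console.log('proxy-like headers present:', found);
});
```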
Many public websites will display the headers for an arbitrary request; redbot is one.
So perhaps you could ask the party whose content is being modified to visit a URL like the YouTube favicon via redbot.
Once you gather enough samples, you could heuristically build a detector.
Also, some CDNs (e.g. Akamai) will allow customers to visit a URL from remote proxy locations in their network. That might give better coverage, although those locations are unlikely to be behind a blocking firewall.
We need to consume an external REST API and dynamically update content on our website, and we have run into the age-old problem of cross-domain Ajax requests.
I've read up on JSONP, but I don't want to go down that route in a million years, as it seems like a rather dirty hack.
As a solution, is it "right" and "proper" to have a local service act as a proxy for any requests to the external API? The client would make an Ajax call to ../RestProxy/MakeRequest, passing it the details of the request to make to the external API; the proxy performs the request and returns anything passed back.
Any thoughts would be appreciated.
There are three ways to do this:
1. JSONP
This is accepted by many popular APIs and frameworks, and jQuery makes it easy. I would recommend this. For example:
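A minimal jQuery sketch; the URL is a placeholder, and the API must support a JSONP callback parameter:

```javascript
// jQuery turns this into a <script> tag request and appends callback=?
$.ajax({
  url: 'https://api.example.com/items',
  dataType: 'jsonp',
  success: function (data) {
    console.log('received:', data);
  },
  error: function () {
    console.log('JSONP request failed');
  }
});
```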
2. Proxy
Works pretty much as you described. It adds an extra hop, plus server code and server load on your side. However, it does allow you to filter or otherwise manipulate the results before sending them to the client.
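A sketch of such a proxy in Node/Express; the route echoes the ../RestProxy/MakeRequest idea from the question, and the upstream URL is illustrative:

```javascript
var express = require('express');
var app = express();

// Same-origin endpoint the browser can call; we forward to the real API.
app.get('/RestProxy/MakeRequest', function (req, res) {
  var upstream = 'https://api.example.com/data?' +
    new URLSearchParams(req.query).toString();
  fetch(upstream)                                  // Node 18+ global fetch
    .then(function (r) { return r.json(); })
    .then(function (body) { res.json(body); })     // filter/reshape here if needed
    .catch(function () {
      res.status(502).json({ error: 'upstream request failed' });
    });
});

app.listen(3000);
```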
3. Rely on Access-Control-Allow-Origin
This is a header that the server can set to allow you to read JSON directly from their server even though you aren't on the same domain. This eliminates the need for the JSONP hack, but it requires that the server be set up to support it, and it requires a web browser that supports it.
Access-Control-Allow-Origin is supported in:
IE8+
Firefox 3.6+
Safari 4.0+
Chrome 6+
iOS Safari 3.2+
Android browser 2.1+
If you need to support IE7, then this option isn't for you.
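On the server side, option 3 is just a response header. A sketch in Express; the allowed origin is a placeholder:

```javascript
var express = require('express');
var app = express();

// Allow a specific origin to read this response cross-domain.
app.get('/api/data', function (req, res) {
  res.set('Access-Control-Allow-Origin', 'https://your-app.example.com');
  res.json({ ok: true });
});

app.listen(3000);
```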
I've created a RESTful API that supports GET/POST/PUT/DELETE requests. Now I want my API to have a JavaScript client library, and I thought of using JSONP to bypass the cross-domain policy. That works, but of course only for GET requests.
So I started thinking about how to implement such a thing while keeping it painless to use.
I thought to change my API implementation to check every HTTP request: if it's a JSONP request (i.e. it has a "callback" parameter in the query string), I allow every API method to be executed via a GET request, even those that should be called with other methods like POST or DELETE.
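Server-side, the check might look something like this Express-style sketch; the _method parameter name and the data-layer helpers are hypothetical:

```javascript
var express = require('express');
var app = express();

// Hypothetical data layer, just to make the sketch self-contained.
function getItem(id) { return { id: id }; }
function removeItem(id) { /* delete from storage */ }

app.get('/api/items/:id', function (req, res) {
  // Only honour the override verb when a JSONP callback is present.
  var verb = req.query.callback
    ? String(req.query._method || 'GET').toUpperCase()
    : 'GET';
  if (verb === 'DELETE') {
    removeItem(req.params.id);
    res.jsonp({ deleted: true });       // Express wraps the body in callback(...)
  } else {
    res.jsonp(getItem(req.params.id));
  }
});

app.listen(3000);
```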
This is not a RESTful approach to the problem, but it works. What do you think?
Maybe another solution could be to dynamically generate an iframe to send non-GET requests. Any tips?
There are some relevant points on a pretty similar question here...
JSONP Implications with true REST
The cross-domain restrictions are there for a reason ;-)
JSONP allows you to expose a limited, safe, read-only view of the API to cross-domain access. If you subvert that, you're potentially opening up a huge security hole: malicious websites can make destructive calls to your API simply by including an image whose src points to the right part of the API.
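For example, if the tunnelling trick above lets a GET trigger a delete, any page the user visits can fire it; the URL and parameters here are hypothetical:

```javascript
// Merely setting src fires the GET with the victim's cookies attached;
// the attacker never needs to read the response.
new Image().src = 'https://api.example.com/items/42?callback=x&_method=DELETE';
```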
Having your webapp expose certain functionality accessed through iframes, where all the Ajax occurs within the context of your webapp's domain, is definitely the safer choice. Even then you still need to take CSRF into consideration. (Take a look at Django's latest security announcement on the Django blog for a prime example: as of a release this week, all JavaScript calls to a Django webapp must be CSRF-validated by default.)
The iframe hack no longer works in recent browsers; do not use it anymore (source: http://jquery-howto.blogspot.de/2013/09/jquery-cross-domain-ajax-request.html).
Is it possible to directly access third-party web services using Ajax? Mostly I've seen that the website I'm visiting handles it on its server and then transfers the processed/unprocessed data to the client browser. Is this always the case?
(yes, almost always)
Typically, a proxy server is used to access third-party web services. You can't reach external third-party web services directly because they exist on separate domains, and you run into the "Same Origin Policy".
Now... there are methods for doing cross-domain Ajax, but the service you are accessing must support them (due to the way cross-domain Ajax works, there are restrictions on what kinds of data can be returned and how the requests are formatted).
A simple way to do this is indeed to use some sort of server-side proxy for your request. It works like this: you make the Ajax request to your own domain, let's say to proxy.php. proxy.php handles your request, forwards it to the third-party service, and returns the results. This way you don't get the cross-domain errors. You can find multiple examples of these simple proxies with a quick Google search.
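The client side of that is then an ordinary same-origin request. A sketch; the proxy path and its query parameter are illustrative:

```javascript
// Ask our own proxy to fetch the third-party resource on our behalf.
var target = encodeURIComponent('https://api.example.com/data');
fetch('/proxy.php?url=' + target)
  .then(function (r) { return r.json(); })
  .then(function (data) { console.log('via proxy:', data); })
  .catch(function (err) { console.error('proxy request failed', err); });
```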