privateNetworkClientServer error when using geoserver with cesium - https

I have a GeoServer hosting imagery over HTTP. My client site uses HTTPS. I've been using OpenLayers and it all works perfectly. Now I'm trying to move to CesiumJS and I'm not getting any imagery in IE or Edge (I'm unable to test other browsers, unfortunately). I can get imagery in my client when using Bing Maps, so the client code is otherwise functional. From the browser console I see:
SEC7117: Network request to http://[myserver]:8080/geoserver/cite/wms?service=WMS&version=1.1.1&request=GetMap&styles=&format=image%2Fjpeg&layers=cite%3Abmpyramid&srs=EPSG%3A3857&bbox=195678.7924100496%2C156543.03392804041%2C234814.55089206248%2C195678.7924100496&width=256&height=256 did not succeed. This Internet Explorer instance does not have the following capabilities: privateNetworkClientServer
and:
SEC7111: HTTPS security is compromised by http://[myserver]:8080/geoserver/cite/wms?service=WMS&version=1.1.1&request=GetMap&styles=&format=image%2Fjpeg&layers=cite%3Abmpyramid&srs=EPSG%3A3857&bbox=195678.7924100496%2C195678.7924100496%2C215246.6716510579%2C215246.6716510579&width=256&height=256
The URLs are good; I can copy/paste them into a new browser tab and get tiles back. From the network tab of the browser dev tools I can see there are no outgoing image requests.
Does anybody know of a way to get around this?

Despite the cryptic error messages, it seems this is not an HTTP/HTTPS issue like I thought; it's a Cross-Origin Resource Sharing (CORS) problem specific to WebGL/CesiumJS. It is summarized near the bottom of this page: https://cesiumjs.org/tutorials/Imagery-Layers-Tutorial/ .
Basically there are two options. First, you can enable CORS in your GeoServer. I confirmed this did indeed resolve the issue in my dev environment. However, this is not really an option for us in prod.
The other option is to set up a proxy: instead of Cesium requesting tiles directly, it requests them from your own web server, and your web server fetches them on Cesium's behalf. When going this route, you modify your Cesium code like so:
layers.addImageryProvider(new Cesium.ArcGisMapServerImageryProvider({
    url : '//server.arcgisonline.com/ArcGIS/rest/services/World_Street_Map/MapServer',
    proxy : new Cesium.DefaultProxy('/proxy/')
}));
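On the server side, the /proxy/ endpoint just has to fetch the requested URL and stream the bytes back; Cesium's DefaultProxy appends the encoded target URL to the proxy path as its query string. A minimal sketch, assuming a Node.js backend (the port is a placeholder, only http:// targets are handled, and a real deployment would restrict which hosts may be proxied):

// Minimal tile proxy: DefaultProxy requests /proxy/?<encoded target URL>,
// so decode the query string, fetch the tile server-side, and stream it back.
const http = require('http');

http.createServer((clientReq, clientRes) => {
    const encoded = clientReq.url.split('?')[1] || '';
    const target = decodeURIComponent(encoded);

    http.get(target, (upstream) => {
        clientRes.writeHead(upstream.statusCode, upstream.headers);
        upstream.pipe(clientRes);
    }).on('error', () => {
        clientRes.writeHead(502);
        clientRes.end('Upstream fetch failed');
    });
}).listen(8081);

Because the tile request now goes to your own origin, the browser no longer sees a cross-origin (or mixed-content) fetch.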

Related

Why does the three.js loader require a web server only for certain browsers?

After playing around a bit with three.js and the glTF loader, I have noticed that Firefox and Edge don't require the glTF file to be served from a web server, but IE11 does. Could anyone explain to me why this is?
Thanks
This happens because of security restrictions in browsers. For example, if you try to load a glTF asset directly from disk via the file protocol (file:///), Chrome logs the following error:
Access to XMLHttpRequest at 'file:///...DamagedHelmet.gltf' from origin 'null' has been blocked by CORS policy: Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https.
The behavior varies between browsers since they do not implement identical security policies.
To avoid security-related problems, three.js recommends using a local web server when developing/testing WebGL applications that load assets from external files.
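As an illustration, here is a minimal sketch of loading a glTF asset with three.js's GLTFLoader once the page is served over http(s) by a local web server (for example python -m http.server); the model path and scene setup are assumptions, and the import paths vary with the three.js version and bundler:

import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();
const loader = new GLTFLoader();

// This works when page and asset are both served over http(s); the same
// request from file:// is blocked as a cross-origin request in some browsers.
loader.load(
    'models/DamagedHelmet.gltf',        // hypothetical path on the local server
    (gltf) => scene.add(gltf.scene),    // add the loaded model to the scene
    undefined,                          // progress callback not needed here
    (error) => console.error('glTF load failed:', error)
);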

How to validate that a certain domain is reachable from the browser?

Our single-page app embeds videos from YouTube for end-user consumption. Everything works great if the user has access to the YouTube domain and to the content of that domain's pages.
However, we frequently run into users whose access to YouTube is blocked by a web filter box on their network, such as https://us.smoothwall.com/web-filtering/ . The challenge here is that the filter doesn't actually kill the request; it simply returns another page with an HTTP status of 200. The page usually says something along the lines of "hey, sorry, this content is blocked".
One option is to try to fetch https://www.youtube.com/favicon.ico to prove that the domain is reachable. The issue is that these filters usually involve a custom SSL certificate so they can inspect the HTTP content (see: https://us.smoothwall.com/ssl-filtering-white-paper/), so I can't rely on TLS to catch the content being swapped under an incorrect certificate, and I will instead receive a perfectly valid favicon.ico file, except from a different site. There's also the whole CORS issue of issuing an XHR from our domain against youtube.com's domain, which means that if I want to fetch that favicon.ico I have to do it JSONP-style. And even using a plain old <img> I can't inspect the contents of the image because of CORS (see Get image data in JavaScript?), so I'm stuck with that approach too.
Are there any proven and reliable ways of dealing with this situation and testing browser-level reachability towards a specific domain?
Cheers.
In general, web proxies that want to play nicely typically annotate the HTTP conversation with additional response headers that can be detected.
So one approach to building a man-in-the-middle detector may be to inspect those response headers and compare the results from behind the MITM with the results from an unfiltered connection.
Many public websites will display the headers for an arbitrary request; redbot is one.
So perhaps you could ask the party whose content is being modified to visit a URL like: youtube favicon via redbot.
Once you gather enough samples, you could heuristically build a detector.
Also, some CDNs (e.g., Akamai) will allow customers to visit a URL from remote proxy locations in their network. That might give better coverage, although those locations are unlikely to be behind a blocking firewall.
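A minimal sketch of the header-inspection idea, checking a same-origin resource whose normal response headers you already know; the endpoint and the header names below are only examples of what a filtering proxy might inject, not a definitive list:

// Compare observed response headers against a known baseline to guess
// whether a filtering proxy is rewriting traffic (heuristic only).
async function looksFiltered() {
    // Hypothetical same-origin endpoint whose clean headers are known.
    const response = await fetch('/health.json', { cache: 'no-store' });

    // Headers a transparent proxy commonly adds or rewrites (examples only).
    const suspicious = ['via', 'x-cache', 'proxy-connection'];
    return suspicious.some((name) => response.headers.has(name));
}

looksFiltered().then((filtered) => {
    console.log(filtered ? 'Possible filtering proxy detected' : 'No proxy markers seen');
});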

How to maintain session in swagger editor

I am using Swagger for building API documentation. I am new to it. What I am doing is logging in first and then calling a list API. But because the session cookies are not maintained, the list call does not execute; you need to log in to the application first and then call the list endpoint.
How can I do this in Swagger Editor (http://editor.swagger.io/#/)?
Thank you
I just spent some time struggling with this same question and as far as I can tell the Swagger Editor will not make API calls with xhr.withCredentials = true, which means that your browser will not send cookies even if the server's CORS policy allows it. There's been some discussion of updating Swagger Editor to allow an option to do this (e.g., https://github.com/swagger-api/swagger-js/issues/251), but it doesn't appear this has been done quite yet.
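For context, this is what the missing opt-in looks like: a cross-origin request only carries cookies when the client sets the credentials flag and the server answers with Access-Control-Allow-Credentials: true plus a specific Access-Control-Allow-Origin. A minimal sketch, with a hypothetical endpoint:

// Cross-origin request that sends session cookies -- the opt-in that
// Swagger Editor does not currently perform.
const xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.example.com/list');    // hypothetical endpoint
xhr.withCredentials = true;                         // include cookies
xhr.onload = () => console.log(xhr.status, xhr.responseText);
xhr.send();

// Equivalent with fetch:
// fetch('https://api.example.com/list', { credentials: 'include' });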
Your best option is probably to avoid the entire cross-origin security issue by hosting Swagger Editor on your own domain. This is one of the suggestions: https://github.com/swagger-api/swagger-editor/blob/master/docs/cors.md#host-swagger-editor-in-your-own-domain. Just run it as if it was one of your own apps on your dev site.
Another suggestion is to just disable Chrome web security: https://github.com/swagger-api/swagger-editor/blob/master/docs/cors.md#run-swagger-editor-in-a-browser-that-ignores-http-access-control. I haven't tried this but it may not work because Swagger Editor is still not setting the XHR request appropriately.
Finally, if you're familiar with Docker, you can run a Swagger Editor Docker instance and link everything up. This is complicated, but I accomplished it by:
running our backend inside a Docker container
running a Swagger Editor instance inside another Docker container on the same network (https://hub.docker.com/r/swaggerapi/swagger-editor/)
running an nginx reverse proxy server inside a third Docker container such that any requests to, e.g., http://localhost/dev/swagger-editor/ are proxy-forwarded to the Swagger Editor container and any requests to http://localhost/api/ are proxy-forwarded to your backend. This way the browser only sees requests to localhost and everything works just fine.
But you need to be willing to get into docker and nginx reverse proxy configs before having a reasonable expectation you'll be successful with this. Otherwise this could be another rabbit hole. Good luck!

How to Detect Anonymity of Proxy?

When adding an HTTP proxy within my Firefox options panel, I have noticed that sometimes when querying Google for my IP the result returns my real IP, whilst other times it returns the IP of the proxy I applied.
When obtaining a proxy, it is usually assigned a type, commonly referred to as one of the following:
Elite (the web server cannot detect that you are using a proxy)
Anonymous (the web server can detect that you are using a proxy, but not your real IP)
Transparent (the web server can find your real IP)
After doing some research, I have found that some proxies send/apply the following headers:
HTTP_CLIENT_IP:
HTTP_FORWARDED:
HTTP_X_FORWARDED_FOR: 11.11.11.11:62728
HTTP_VIA:
HTTP_PROXY_CONNECTION:
When browsing with a proxy applied, I have tried to inspect my headers using the Firefox extension LiveHttpHeaders, but I am unable to see any of the above headers, yet Google is able to detect my real IP.
How can I search for these headers?
With your proxy applied, point your browser to http://request.urih.com/. This page will show all of the headers in the HTTP request, including those you listed in your question, if they are present.
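If you would rather not depend on a third-party page, a tiny endpoint on a server you control can echo back whatever request headers the proxy actually forwarded; a minimal sketch, assuming a Node.js environment (the port is arbitrary):

// Echo incoming request headers so you can see exactly what a proxy
// adds or rewrites (e.g., X-Forwarded-For, Via).
const http = require('http');

http.createServer((req, res) => {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(req.headers, null, 2));
}).listen(8080, () => {
    console.log('Header echo running on http://localhost:8080/');
});

Requesting this endpoint through the proxy shows the headers exactly as the server receives them.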

Azure and CORS Access-Control-Allow-Origin with ajax and php

First, I'm not on the web side of our world, so be nice to the backend guy.
A quick background: for a personal need I've developed a Google Chrome extension. Extensions are basically a web page loaded in a Chrome window and... yeah, that's it. Everything is on the client side (scripts, styles, images, etc.); only the data come from a server through AJAX calls. A cron job calls a PHP script every hour to generate two files: data.json contains the "latest" data in JSON format, and hash.json contains the hash of that data. The client Chrome application uses local storage. If the remote hash differs from the local one, it simply retrieves the data file from the remote server.
As I have a BizSpark account with Azure, my first idea was: an Azure Web Site with PHP for the script, a simple homepage and the generated files, and the Azure Scheduler for the jobs.
I've developed everything locally and it all runs fine... but once on the Azure platform I get this error:
XMLHttpRequest cannot load http://tso-mc-ws.azurewebsites.net/Core/hash.json. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:23415' is therefore not allowed access.
But what I really can't understand is that I'm able (and you'll be too) to get the file with my browser... so I just don't get it. Based on some posts I found on SO and other sites, I've also tried manipulating the config and adding extra headers; nothing seems to be working.
Any ideas?
But what I really can't understand is that I'm able (and you'll be too) to get the file with my browser... So I just don't get it
So when you type http://tso-mc-ws.azurewebsites.net/Core/hash.json into your browser's address bar, it is not a cross-domain request. However, when you make an AJAX request from an application running on a different domain (http://localhost:23415 in your case), that's a cross-domain request, and because CORS is not enabled on your website, you get the error.
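For illustration, this is the kind of call that triggers the error; a minimal sketch using fetch with the URL from the question, which the browser only lets succeed once the response carries a matching Access-Control-Allow-Origin header:

// Runs on http://localhost:23415, so this is a cross-origin request and
// the browser enforces CORS on the response.
fetch('http://tso-mc-ws.azurewebsites.net/Core/hash.json')
    .then((res) => res.json())
    .then((hash) => console.log('remote hash:', hash))
    // Without Access-Control-Allow-Origin on the response, the browser
    // blocks access and the request fails here instead.
    .catch((err) => console.error('CORS blocked:', err));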
As far as enabling CORS is concerned, please take a look at this thread: HTTP OPTIONS request on Azure Websites fails due to CORS. I've never worked with PHP/Azure Websites so I may be wrong about this link, but hopefully it points you in the right direction.
OK, this will perhaps read as a bit of a troll answer, but that's not my point (I'm a .NET consultant, so... nothing against MS).
I picked a Linux Azure virtual machine, installed Apache and PHP, configured Apache, set some permissions, defined the CORS header, and configured a cron job, all in roughly 30 minutes... As my goal was just to get it running, the problem is solved: it's running.
