How to Detect Anonymity of a Proxy?

When adding an HTTP proxy in my Firefox options panel, I have noticed that querying Google for my IP sometimes returns my real IP, while other times it returns the IP of the proxy I applied.
When obtaining a proxy, it is usually assigned a type, commonly referred to as one of:
Elite (the web server cannot detect that you are using a proxy)
Anonymous (the web server can detect that you are using a proxy, but not your real IP)
Transparent (the web server can find your real IP)
After doing some research, I have found that some proxies send/apply the following headers:
HTTP_CLIENT_IP:
HTTP_FORWARDED:
HTTP_X_FORWARDED_FOR: 11.11.11.11:62728
HTTP_VIA:
HTTP_PROXY_CONNECTION:
When browsing with a proxy applied, I have tried to inspect my headers using the Firefox extension LiveHttpHeaders, but I cannot see any of these headers - yet Google is still able to detect my real IP.
How can I search for these headers?

These headers are added by the proxy after your browser sends the request, which is why a client-side extension like LiveHttpHeaders cannot show them; you have to inspect the request on the receiving end. With your proxy applied, point your browser to http://request.urih.com/. This page will show all of the headers in the HTTP request as the server received them, including the ones you listed if the proxy added them.
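If you would rather not rely on a third-party page, any server you control can do the same job. Below is a minimal sketch of such an echo service in Node.js (no dependencies; the port is arbitrary). Request it through the proxy and any injected headers will show up in the response body.

const http = require('http');

// Echo every request header back to the client as plain text, so headers
// injected by an intermediate proxy (X-Forwarded-For, Via, and so on)
// become visible.
http.createServer((req, res) => {
  const lines = Object.entries(req.headers)
    .map(([name, value]) => `${name}: ${value}`);
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end(lines.join('\n') + '\n');
}).listen(8080, () => console.log('echo service on :8080'));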

Related

privateNetworkClientServer error when using geoserver with cesium

I have a GeoServer hosting imagery over HTTP. My client site uses HTTPS. I've been using OpenLayers and it all works perfectly. Now I'm trying to move to CesiumJS and I'm not getting any imagery in IE or Edge (I am unable to test other browsers, unfortunately). I can get imagery in my client if using Bing Maps, so the client code is otherwise functional. From the browser console I see:
SEC7117: Network request to http://[myserver]:8080/geoserver/cite/wms?service=WMS&version=1.1.1&request=GetMap&styles=&format=image%2Fjpeg&layers=cite%3Abmpyramid&srs=EPSG%3A3857&bbox=195678.7924100496%2C156543.03392804041%2C234814.55089206248%2C195678.7924100496&width=256&height=256 did not succeed. This Internet Explorer instance does not have the following capabilities: privateNetworkClientServer
and:
SEC7111: HTTPS security is compromised by http://[myserver]:8080/geoserver/cite/wms?service=WMS&version=1.1.1&request=GetMap&styles=&format=image%2Fjpeg&layers=cite%3Abmpyramid&srs=EPSG%3A3857&bbox=195678.7924100496%2C195678.7924100496%2C215246.6716510579%2C215246.6716510579&width=256&height=256
The URLs are good; I can copy/paste them into a new browser tab and get tiles back. From the network tab of the browser dev tools I can see there are no outgoing image requests.
does anybody know of a way to get around this?
Despite the cryptic error messages, it seems this is not an HTTP/HTTPS issue like I thought; it's a Cross Origin Resource Sharing (CORS) problem specific to WebGL/CesiumJS. It is summarized near the bottom of this page: https://cesiumjs.org/tutorials/Imagery-Layers-Tutorial/ .
Basically, there are two options. First, you can enable CORS in your GeoServer. I confirmed this did indeed resolve the issue in my dev environment; however, it is not really an option for us in prod.
The other option is to set up a proxy: instead of Cesium requesting tiles directly, it requests them from your own web server, and your web server fetches them on its behalf. When going this route, you modify your Cesium code like so:
layers.addImageryProvider(new Cesium.ArcGisMapServerImageryProvider({
    // Tile source stays the same...
    url : '//server.arcgisonline.com/ArcGIS/rest/services/World_Street_Map/MapServer',
    // ...but every tile request is rewritten to go through your own /proxy/ endpoint
    proxy : new Cesium.DefaultProxy('/proxy/')
}));
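For reference, here is a minimal sketch of what the server side of that '/proxy/' endpoint could look like, in Node.js/Express (the framework, port, and lack of error handling are assumptions; Cesium.DefaultProxy('/proxy/') requests tiles as /proxy/?<url-encoded tile URL>):

const express = require('express');
const app = express();

app.get('/proxy/', async (req, res) => {
  // DefaultProxy puts the URL-encoded target after the '?'
  const target = decodeURIComponent(req.originalUrl.split('?')[1] || '');
  if (!target.startsWith('http')) return res.status(400).send('bad target');
  const upstream = await fetch(target);  // Node 18+ global fetch
  res.status(upstream.status);
  res.set('Content-Type',
      upstream.headers.get('content-type') || 'application/octet-stream');
  res.send(Buffer.from(await upstream.arrayBuffer()));
});

app.listen(3000);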

Does Google Universal Analytics support x-forwarded-for header for ip filters?

I am trying to filter out internal IPs, but it doesn't seem to work. The application sits behind a proxy, so the client's IP is in the X-Forwarded-For header.
According to this question, the Measurement Protocol (which forms the base for all versions of Google Analytics tracking) ignores X-Forwarded-For headers.
This is solved in server-side implementations: Google has added a parameter, uip (must be a valid IP; it will always be anonymized by zeroing the last octet), to the Measurement Protocol for sending a client's IP. I don't know terribly much about proxy servers, but it might be possible to stitch the parameter into the request instead of using a header field.
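For illustration, a raw Measurement Protocol pageview hit carrying uip might look like this (a sketch only; the tid and cid values are placeholders, fetch is the Node 18+ global, and top-level await assumes an ES module):

const params = new URLSearchParams({
  v: '1',               // Measurement Protocol version
  tid: 'UA-XXXXX-Y',    // tracking ID (placeholder)
  cid: '555',           // client ID (placeholder)
  t: 'pageview',        // hit type
  dp: '/home',          // document path
  uip: '203.0.113.7',   // the client IP to attribute the hit to
});

await fetch('https://www.google-analytics.com/collect', {
  method: 'POST',
  body: params.toString(),
});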
I haven't actually tested this solution yet, but I'm working on it.
It seems like you should be able to do something like this:
import ua from 'universal-analytics'  // Google Analytics client for Node
import publicIp from 'public-ip'      // resolves this machine's public IP

const user = ua(GOOGLE_ANALYTICS_ID)  // GOOGLE_ANALYTICS_ID: your tracking ID
const uip = await publicIp.v4()       // top-level await requires an ES module
user.set('uip', uip)                  // attach the IP to every subsequent hit
After that, the user's IP is set for all subsequent events and page views.
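Note that publicIp.v4() resolves the public IP of the machine the code runs on, which server-side is the server's own IP, not the visitor's. Behind a proxy you would more likely read the client address from X-Forwarded-For; a hedged Express sketch (the middleware placement and tracking ID are placeholders):

const express = require('express');
const ua = require('universal-analytics');
const app = express();

app.use((req, res, next) => {
  // The first entry in X-Forwarded-For is the original client
  const clientIp = (req.headers['x-forwarded-for'] || req.socket.remoteAddress || '')
    .split(',')[0].trim();
  const visitor = ua('UA-XXXXX-Y');  // placeholder tracking ID
  visitor.set('uip', clientIp);      // attribute hits to the real client IP
  visitor.pageview(req.path).send();
  next();
});

app.listen(3000);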

Forwarding HTTP headers using Juniper

I'm working with a sysadmin who uses a Juniper solution that behaves as a proxy. I have no idea what it is, but here's a picture of the web interface: http://imagebin.ca/v/1UKN1jGYPUWd
Through that proxy, I'm trying to use SharePoint's REST API; unfortunately, there are some headers (such as X-RequestDigest) that Juniper's proxy doesn't forward to SharePoint.
Basically, I need the equivalent of nginx's proxy_pass_request_headers for Juniper's appliance.
The sysadmin doesn't seem to know what HTTP header forwarding is, or how to configure it. Can anyone identify the solution he's using from the picture? Does anyone know where to find documentation about this?
Further to my comment added above, there appears to be no way to implicitly pass variables around. You can tell the current IVEOS images that the web URL you're linking to is a SharePoint site, and it'll do "clever" things with the URL, but I'm not exactly sure what you want it to do, or whether they'll handle it.
Here are the screenshots for the "SharePoint" configuration panels on the Web Resources page. As I'm not a SharePoint admin, I can't tell you whether these are useful to you or not.
I hope it helps!
You may be looking for the Web Resource custom header policy
https://www.juniper.net/documentation/en_US/sa8.0/topics/task/operational/secure-access-web-rewrite-custom-header-policy.html
Edit: The first resource became a dead link. New link: https://www.juniper.net/techpubs/en_US/nsm2012.2/topics/task/configuration/remote-management-secure-web-resource-policy-configuring-nsm.html
For custom headers (to send some user information), we've used the "Web Rewriting Resource Policy":
SSO Cookies/Headers > General tab > Headers and Values
to pass custom user data (user name, role, client certificate).
I assume you have the backend application (SharePoint) configured as a PTP (Pass-Through Proxy) web resource. I am pretty confident that only standard HTTP headers are passed to the backend by default :(
To pass all custom headers, I found the following book (Juniper(r) Networks Secure Access SSL VPN Configuration Guide): https://books.google.be/books?id=5OYf6u5vzFsC&pg=PA369&lpg=PA369&dq=Juniper+pass+custom+headers&source=bl&ots=s5oF5NEKjP&sig=8091EV2Pyw6pIFQifMOIR2pLpLk&hl=de&sa=X&ved=0ahUKEwiFwpf6m_DOAhWFWRQKHXoRD0EQ6AEIPDAE
where it says:
Passing custom headers can be enabled under:
Users > Resource Policies > Web > Custom Headers
This option may not be visible in the admin interface by default; it needs to be enabled under:
Users > Resource Policies > Web > Web ACL, where there's a "Customize" button.

Debugging KML in GE client

I am working with the Google Earth 6.2.2.6613 client and KML files, and I have a need for debugging.
I have a situation where GE reports that my KML file has made an "invalid HTTP request", and it displays the offending URL.
I can cut and paste the URL into a web browser and it returns the expected result.
So the question is: how can you get useful debugging information from the GE client?
For example, "invalid HTTP request": how? What's invalid? Does the GE client run/have a debug log or mode?
I am using Windows 7 Professional 64-bit, but I will need to test other versions (Mac OS X) in the future.
While a web proxy (or my own globe server) would allow me to see the HTTP traffic, I need to see what happens in between requests in the GE client.
Google Earth itself doesn't log what it does internally or have a debug mode to enable such logging. You can enable 'KML Error Handling' in the Tools/Options/General menu, which may give more information for invalid KML, but validating the KML itself is best done with something like the KML Validator.
So the easiest way to debug Google Earth's HTTP access is to use a network analyzer such as the Fiddler Web Debugger to inspect the network traffic.
Fiddler runs as an HTTP proxy and captures all web access showing the full HTTP request and response information. Just click 'Capture Traffic' and then launch Google Earth to capture all HTTP traffic.
You can capture the HTTP session with hits to kh.google.com, mw1.google.com, khmdb.google.com, mw2.google.com, and so on, with the full URL, HTTP headers for request and response, etc. There are many options for multiple views, filtering, decoding, timing information, and more. You'll see the selected Layers being downloaded as KMZ files.
Sample Web session
Result Protocol Host URL
200 HTTP kh.google.com /geauth?ct=free
200 HTTP Tunnel to www.google.com:443
200 HTTP Tunnel to accounts.google.com:443
200 HTTP kh.google.com /flatfile?q2-0-q.534
200 HTTP mw1.google.com /mw-earth-vectordb/photos/360cities/360cities.kmz
200 HTTP mw1.google.com /mw-weather/base/files/kml/weather_en.kmz
...
After debugging you can stop Fiddler which restores the HTTP proxy settings back to normal.
I use this tool to quickly see what Google Earth is doing behind the scenes. It is easy to use and very friendly.
http://www.fiddler2.com/fiddler2/

Cross Domain request for service using SproutCore

I have been trying to get this resolved, without any success.
I have a web app residing on my domain, say www.myDomain.com. I need to call a service on another domain, say www.anotherDomain.com/service.do.
I'm using SproutCore's SC.Request.getUrl('www.anotherDomain.com/service.do') to call that service.
I get an error that says: Origin www.myDomain.com is not allowed by Access-Control-Allow-Origin.
When I was in the dev stages and using sc-server, the issue was resolved using proxies. Now that I have deployed the app to an actual server, I have replaced all the lines where I had set up the proxy with the actual domain name, and I have started getting that error again.
The problem is that I CANNOT MAKE ANY CHANGES to the server on the other domain. All the posts that I have come across state that the server on the other domain ought to provide the Access-Control-Allow-Origin header and that it ought to support the OPTIONS verb.
My question is, is it possible for me to connect to that service using SproutCore's SC.Request.getUrl() method?
Additionally, other posts that I have read mention that a simple GET request ought not to be preflighted. Why then are my requests going out as OPTIONS instead of GET?
Thanks a ton in advance! :D
This is not a SproutCore issue; it's the browser's Same Origin Policy at work. (As for the preflight: a GET only skips the OPTIONS preflight if it is a "simple" request, and SC.Request typically adds request headers, such as a JSON Content-Type, that disqualify it.)
If you can't modify the production server, you have no option but to develop your own proxy server, and have your proxy hit the real service.
This is effectively replacing sc-server in your production environment.
All this server would do is take the incoming request and pass it along to www.anotherDomain.com/service.do.
You would need to make sure you passed along all parameters, cookies, headers, the HTTP verb, etc., as in the sketch below.
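A minimal sketch of such a pass-through in Node.js/Express (the route, target host, forwarded header set, and port are assumptions; a production proxy would need to be far more careful):

const express = require('express');
const app = express();
app.use(express.raw({ type: '*/*' }));  // keep the request body untouched

app.all('/service.do', async (req, res) => {
  // Forward the verb, query string, body and the headers the service needs
  const upstream = await fetch('http://www.anotherDomain.com' + req.originalUrl, {
    method: req.method,
    headers: {
      'content-type': req.headers['content-type'] || 'application/octet-stream',
      cookie: req.headers.cookie || '',
    },
    body: ['GET', 'HEAD'].includes(req.method) ? undefined : req.body,
  });
  res.status(upstream.status);
  upstream.headers.forEach((value, name) => {
    // fetch already decompressed the body, so drop stale framing headers
    if (!['content-encoding', 'content-length', 'transfer-encoding'].includes(name)) {
      res.set(name, value);
    }
  });
  res.send(Buffer.from(await upstream.arrayBuffer()));
});

app.listen(8080);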
This is far from ideal, because now errors can occur in more places. Did the real service fail? Did the proxy fail? etc.
If you could modify the other domain, you could:
1) deploy your SC app there, or
2) put in the CORS headers so you could make cross-domain requests.
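For completeness, option 2 boils down to the service emitting headers like these; a hedged Express sketch (the allowed origin is a placeholder, and the header list would depend on what the client actually sends):

const express = require('express');
const app = express();

// Answer CORS preflights and tag every response with the allowed origin
app.use((req, res, next) => {
  res.set('Access-Control-Allow-Origin', 'http://www.myDomain.com');
  res.set('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  res.set('Access-Control-Allow-Headers', 'Content-Type, X-Requested-With');
  if (req.method === 'OPTIONS') return res.sendStatus(204);
  next();
});

app.listen(8080);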
