Best whitelist-capable HTTP proxy for Windows?

I would like to set up an HTTP proxy on my work machine (no admin rights, WinXP) to allow access only to a whitelist of URLs. What would be the easiest solution? I prefer open-source software if possible.

Squid seems to be the de facto proxy. This link describes how to set it up on a Windows box: http://www.ausgamers.com/features/read/2638752
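If you do go with Squid, the whitelist itself is only a few lines of squid.conf; a minimal sketch (the port and domain names below are placeholders to adapt):

    # squid.conf - allow only whitelisted domains, deny everything else
    http_port 3128
    acl whitelist dstdomain .example.com .example.org
    http_access allow whitelist
    http_access deny all

You would then point the browser's proxy settings at 127.0.0.1:3128.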

Why not use the Content Advisor in IE? You can provide a list of approved sites; anything else is blocked. Or do you want pass-through functionality like a true proxy?

Content Advisor will ask for authorization every time a JavaScript function is called. At least that's my experience right now, and that's how I landed here after hours of googling.
You are right. However, if the sites on the whitelist don't use JavaScript intensively, I would suggest trying that option first because (and I'm an IT person) it's FAAAAAAAAR easier to set up Content Advisor than a proxy server. Google "noaccess.rat" and you'll come across articles that tell you how to set up IE using a whitelist approach.
Having said this, however, you must be fully aware that Content Advisor can be easily disabled, even without knowing the password. One of my users did it in no time. You can find this on Google as well.
Alex

Related

Are there any ways to monitor all HTTP/HTTPS requests and block certain ones using a single script on Windows?

I want to write a program that can monitor all HTTP/HTTPS requests the system uses to open the default browser, block certain ones, and automatically change certain requested URLs into others. Changing a URL is simple, but the monitoring and blocking part is quite puzzling.
e.g. when the URL 'https://example.com/asdf.htm' is clicked, the program should block the request, Windows should receive 'http://www.example2.org/asdf.htm' instead, and the default browser should open the latter URL rather than the former.
I am an amateur developer and student who does not have much experience in solving such problems.
I searched the web and found someone asked a similar question years ago:
https://superuser.com/questions/554668/block-specific-http-request-from-windows
However, I didn't find any useful advice on coding on that page. Maybe I can use an antivirus program or the hosts file to block certain URLs, but then the URL replacement cannot be done. Admittedly, pointing the hosts file at a server which redirects certain requests might work, but that's too complex. I hope someone can help me solve the problem by suggesting a simple method of monitoring the Windows system itself. Thanks!
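(For reference, what I mean by the hosts-file approach is just mapping a hostname to an unroutable address, as in the placeholder lines below; that blocks the site, but it cannot substitute a different URL.)

    # C:\Windows\System32\drivers\etc\hosts - blocks the name, cannot rewrite the URL
    0.0.0.0 example.com
    0.0.0.0 www.example.com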
To summarize our conversation in the comments: in order to redirect or restrict traffic, either to sites or to ports (protocols are actually "mapped" via ports), the main solutions usually are:
a software firewall - keep in mind that software firewalls don't usually redirect; they just permit or block traffic on specific ports
a hardware firewall (or an advanced router - not the consumer-grade ones, but enterprise grade) - they do what you want, but they are very expensive and not worth it for a home experiment
a proxy server - this can do what you want
Other alternatives that might or might not work include editing the hosts file, as you said, but as stated earlier I don't recommend it: it's a system file, and if you forget about it, it can become a hindrance (also keep in mind that normally you should not use a Windows account with admin rights, even at home, but that is another story). Another option is a browser extension, which I would guess only changes content on pages, not the way the browser works (such as changing URLs).
I think a proxy server is the best pick here. Try it and let me know.
Keep in mind I still recommend you read about networking in order to get a better idea of what you can and can't do in each setup.
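As a concrete sketch of the proxy-server approach (my example uses mitmproxy, which is not something mentioned above; the hostnames are placeholders), an addon script that rewrites requests for one host to another could look roughly like this:

    # redirect_addon.py - run with: mitmproxy -s redirect_addon.py
    # Sketch only: assumes mitmproxy is installed and the browser is configured
    # to use it as its HTTP/HTTPS proxy (default 127.0.0.1:8080).
    from mitmproxy import http

    BLOCKED_HOST = "example.com"           # host to intercept (placeholder)
    REPLACEMENT_HOST = "www.example2.org"  # host to send the request to instead

    def request(flow: http.HTTPFlow) -> None:
        if flow.request.pretty_host == BLOCKED_HOST:
            # Keep the path and query string, but swap the host and drop to
            # plain HTTP, mirroring the https -> http example in the question.
            flow.request.host = REPLACEMENT_HOST
            flow.request.scheme = "http"
            flow.request.port = 80

Intercepting HTTPS this way also requires trusting mitmproxy's certificate in the browser.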

Does anybody know the technology approach behind livelook.com's concept?

Livelook.com "shares" screens without requiring the shared parties to download or install anything. These are my guesses on how it works:
A proxy server keeping track of shared parties who are both browsing the same website (thus not really screen sharing)
Interaction is being "recorded" by host-side JavaScript and then serialized over to the client's JavaScript (Ajax), which will then have to de-serialize the actions from the host and mimic the behaviour.
Does anyone else have a take on this?
I did some research on this, and I'm pretty sure it uses Java since that is one of the requirements.
"To use LiveLOOK's screen sharing and co browsing products you will need a browser and java. Java is typically pre-installed on all computers. If your computer d..."
http://www.livelook.com/faq.asp
Here is perhaps how they get access to the screen:
http://www.daniweb.com/software-development/java/code/216988/java-code-to-capture-your-screen-as-image
But I'm a C++ man, so I wouldn't know if there's a better way to do things like that in Java.
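The linked example is built around java.awt.Robot; purely to illustrate the capture step, here is a rough analogue in Python using Pillow (my own sketch, not anything LiveLOOK is known to use):

    # screen_grab.py - periodic full-screen capture sketch (pip install pillow)
    import time
    from PIL import ImageGrab

    for i in range(5):
        img = ImageGrab.grab()          # grab the whole screen
        img.save(f"frame_{i:03d}.png")  # a real tool would diff/compress/stream this
        time.sleep(1)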

Changing domain linked to a Selenium::Client::Driver instance

I'm using the Selenium Client (v 1.2.18) to do automated navigation of retail websites for which there exists no external API. My goal is to determine real-time, site-specific product availability using the "Check Availability" button that exists on a lot of these sites.
In case there's any concern, each of these checks will be initiated by a real live consumer who is actually interested in whether or not something's available at that store. There will be no superfluous requests or other internet badness.
I'm using Selenium's Grid framework so that I can run stuff in parallel and I'm keeping each of the controlled browsers open between requests. The issue I'm experiencing is that I need to perform these checks across a number of different domains, and I won't know in advance which one I will have to check next. I didn't think this would be too big an issue, but it turns out that when a Selenium browser instance gets made, it gets linked to a specific domain and I haven't been able to find any way to change what domain that is. This requires restarting a browser each time a request comes in for a domain we're not already linked to.
Oh, and the reason we're using Selenium instead of something more lightweight (e.g. Mechanize) is that we need something that can handle JavaScript.
Any help on this would be greatly appreciated. Thanks in advance.
I suppose you are restricted from changing domains because of the same-origin policy. Did you try using a browser launcher with elevated security privileges, like *iehta for Internet Explorer or *chrome for Firefox? While using these browser modes, use the open method in your tests and pass the URL you want to open. This might solve your problem.
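If the elevated-privilege launchers do lift the restriction as suggested, the flow would look roughly like the sketch below. It uses the old Selenium RC Python client rather than the Ruby Selenium::Client::Driver from the question, and the store URLs are placeholders.

    # selenium_rc_cross_domain.py - sketch only; assumes a Selenium RC server on
    # localhost:4444 and the pre-WebDriver "selenium.selenium" client class.
    from selenium import selenium

    # "*chrome" was Selenium RC's elevated-privilege Firefox launcher
    # (use "*iehta" for Internet Explorer). The last argument is the start URL.
    browser = selenium("localhost", 4444, "*chrome", "http://www.example-store-a.com/")
    browser.start()

    # With elevated privileges, open() can supposedly be pointed at a different
    # domain without restarting the browser session.
    browser.open("http://www.example-store-b.com/product/123")
    print(browser.is_element_present("id=check-availability"))

    browser.stop()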

Personal Internet use monitoring

How could a (Windows) desktop application be created to monitor the amount of time spent on a particular website?
My first idea was to play with the hosts file to intercept requests, log them, and proxy. This feels a bit clunky, and I suspect my program would look like malware.
I feel like there is a smarter way? Any ideas?
There is a tool similar to what you are looking for called K-9 Web Protection. It is mostly used by parents to monitor what their kids are up to on the internet. I have installed it on my niece's computer with good results; it blocks sites, filters content, and restricts internet times. This may be over the top for your needs, but it's worth a shot, as you can see which sites were visited.
The other option is to use a dedicated firewall/monitoring solution such as IPCop, a Linux-based distribution whose sole purpose is to provide a proxy, a stateful packet inspection (SPI) firewall, and an Intrusion Detection System (IDS).
Hope this helps,
Best regards,
Tom.
You could do this by monitoring active connections via netstat. If you need more advanced data, you can install the Windows Packet Capture Library (WinPcap) and get any data about network use; inside your desktop app, find the network traffic that relates to 'spending time' on a website (which might just be GET requests for you, but I don't know) and record statistics as required.
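A minimal sketch of the netstat idea in Python (my own example; it only logs which remote endpoints have established HTTP/HTTPS connections, and you would still have to map those IPs back to sites and to 'time spent'):

    # connection_poll.py - poll "netstat -n" and log remote endpoints on ports 80/443
    import subprocess
    import time
    from datetime import datetime

    def http_endpoints():
        out = subprocess.run(["netstat", "-n"], capture_output=True, text=True).stdout
        endpoints = set()
        for line in out.splitlines():
            parts = line.split()
            # Windows netstat lines look like: TCP  local:port  remote:port  STATE
            if len(parts) >= 4 and parts[0] == "TCP" and parts[3] == "ESTABLISHED":
                remote = parts[2]
                if remote.rsplit(":", 1)[-1] in ("80", "443"):
                    endpoints.add(remote)
        return endpoints

    while True:
        for endpoint in http_endpoints():
            print(f"{datetime.now().isoformat()}\t{endpoint}")
        time.sleep(10)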
Route the traffic through a scriptable proxy and change the browser settings to point to that proxy.
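A sketch of that scriptable-proxy idea (assuming mitmproxy as the proxy, which is only one option; the log path is a placeholder): log a timestamped line per request host, then aggregate per-host time offline.

    # time_log_addon.py - run with: mitmproxy -s time_log_addon.py
    from datetime import datetime
    from mitmproxy import http

    LOG_FILE = "browsing_log.tsv"  # placeholder path

    def request(flow: http.HTTPFlow) -> None:
        # One line per request: timestamp and host; summing gaps per host
        # afterwards gives a rough "time spent" figure.
        with open(LOG_FILE, "a") as log:
            log.write(f"{datetime.now().isoformat()}\t{flow.request.pretty_host}\n")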

Good practice or bad practice to force entire site to HTTPS?

I have a site that works very well when everything is in HTTPS (authentication, web services, etc.). If I mix HTTP and HTTPS it requires more coding (cross-domain problems).
I don't seem to see many web sites that are entirely in HTTPS so I was wondering if it was a bad idea to go about it this way?
Edit: Site is to be hosted on Azure cloud where Bandwidth and CPU usage could be an issue...
EDIT 10 years later: The correct answer is now to use https only.
You lose a lot of features with HTTPS (mainly related to performance):
Proxies cannot cache pages
You cannot use a reverse proxy for performance improvement
You cannot host multiple domains on the same IP address
Obviously, the encryption consumes CPU
Maybe that's no problem for you, though; it really depends on the requirements.
HTTPS decreases server throughput, so it may be a bad idea if your hardware can't cope with it. You might find this post useful. This paper (academic) also discusses the overhead of HTTPS.
If you have HTTP requests coming from an HTTPS page, you'll force the user to confirm the loading of insecure data. That's annoying on some websites I use.
This question and especially the answers are OBSOLETE. This question should be tagged: <meta name="robots" content="noindex"> so that it no longer appears in search results.
To make THIS answer relevant:
Google is now penalizing websites in search rankings when they fail to use TLS/HTTPS. You will ALSO be penalized in rankings for duplicate content, so be careful to serve a page EITHER as HTTP OR HTTPS, BUT NEVER BOTH (or use accurate canonical tags!).
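(A canonical tag is just one line in the page head; example.com below is a placeholder:)

    <link rel="canonical" href="https://www.example.com/page.html">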
Google is also aggressively flagging insecure connections, which has a negative impact on conversions by frightening off would-be users.
This is in pursuit of a TLS-only web/internet, which is a GOOD thing. TLS is not just about keeping your passwords secure — it's about keeping your entire world-facing environment secure and authentic.
The "performance penalty" myth is really just based on antiquated, obsolete technology. This comparison shows TLS being faster than HTTP (however, it should be noted that the page compares encrypted HTTP/2 HTTPS against plaintext HTTP/1.1).
It is fairly easy and free to implement using LetsEncrypt if you don't already have a certificate in place.
If you DO have a certificate, then batten down the hatches and use HTTPS everywhere.
TL;DR, here in 2019 it is ideal to use TLS site-wide, and advisable to use HTTP/2 as well.
</soapbox>
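Since the question mentions Azure hosting: forcing site-wide HTTPS there usually comes down to a rewrite rule. A sketch for IIS / Azure App Service, assuming the URL Rewrite module is available (adapt before using):

    <!-- web.config sketch: redirect every HTTP request to HTTPS -->
    <configuration>
      <system.webServer>
        <rewrite>
          <rules>
            <rule name="Force HTTPS" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTPS}" pattern="off" ignoreCase="true" />
              </conditions>
              <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>
    </configuration>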
If you've no side effects then you are probably okay for now and might be happy not to create work where it is not needed.
However, there is little reason to encrypt all your traffic. Certainly login credentials or other sensitive data need it. One of the main things you would be losing out on is downstream caching: your servers, the intermediate ISPs, and users cannot cache HTTPS responses. This may not be completely relevant, as it reads like you are only providing services. However, it completely depends on your setup, whether there is an opportunity for caching, and whether performance is an issue at all.
It is a good idea to use all-HTTPS - or at least provide knowledgeable users with the option for all-HTTPS.
If there are certain cases where HTTPS is completely useless and in those cases you find that performance is degraded, only then would you default to or permit non-HTTPS.
I hate running into pointlessly all-HTTPS sites that handle nothing that really requires encryption, mainly because they all seem to be 10x slower than every other site I visit. For example, most of the documentation pages on developer.mozilla.org force you to view them over HTTPS for no reason whatsoever, and they always take long to load.
