It has been discussed many times on Stack Overflow that, by default, WebRTC leaks your real IP address even if you're using a proxy to browse the web. What I haven't seen discussed is whether this requires the end user to click a button to enable the leak, or whether the leak occurs regardless of any action taken by the user.
For example, ExpressVPN requires you to press a button to test for a WebRTC leak. My question is: is this done for privacy reasons, or does the button somehow activate WebRTC so that it can leak your IP?
In other words, assuming you never need to use WebRTC (you just browse a blog or an eCommerce shop) and all you do is click a few links - can a website still detect your real IP through WebRTC?
Thanks
Yes, a browser can detect your public IP address using WebRTC.
No, the leak does not rely on any button interaction.
Recently, I found an unpatched GitHub repo, webrtc-ip, which can leak a user's public IP address using WebRTC. This is powerful because you cannot trace it - nothing is shown in the Network tab.
Sadly, this leak does not work for private IPs, due to the gradual shift to mDNS (at least for WebRTC), which is described completely in this great blog. Anyway, here's a working demo:
https://webrtc-ip.herokuapp.com/
I am not sure if this leaks your true IP address even if you are using a proxy, but feel free to test it out.
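For anyone curious how the lookup works under the hood: WebRTC's ICE layer asks a STUN server which address it sees you connecting from. Below is a minimal sketch that reproduces just that STUN Binding request by hand in Python (per RFC 5389), so it illustrates the mechanism rather than the browser leak itself; stun.l.google.com is simply a well-known public STUN server.

    # Minimal STUN Binding request (RFC 5389): ask a public STUN server
    # which IP/port it sees us as. This is the same lookup WebRTC's ICE
    # gathering performs when it produces server-reflexive candidates.
    import os
    import socket
    import struct

    MAGIC_COOKIE = 0x2112A442

    def public_address(stun_host="stun.l.google.com", stun_port=19302):
        # Header: type 0x0001 (Binding request), length 0, magic cookie,
        # and a random 12-byte transaction ID.
        request = struct.pack("!HHI12s", 0x0001, 0, MAGIC_COOKIE, os.urandom(12))
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(3)
            sock.sendto(request, (stun_host, stun_port))
            data, _ = sock.recvfrom(2048)

        # Walk the attributes after the 20-byte header and find
        # XOR-MAPPED-ADDRESS (type 0x0020).
        pos = 20
        while pos + 4 <= len(data):
            attr_type, attr_len = struct.unpack_from("!HH", data, pos)
            if attr_type == 0x0020:
                # Value layout: reserved byte, family, XOR'd port, XOR'd IPv4.
                port = struct.unpack_from("!H", data, pos + 6)[0] ^ (MAGIC_COOKIE >> 16)
                ip = struct.unpack_from("!I", data, pos + 8)[0] ^ MAGIC_COOKIE
                return socket.inet_ntoa(struct.pack("!I", ip)), port
            pos += 4 + attr_len + (-attr_len % 4)  # attributes are 4-byte aligned
        return None

    print(public_address())

A typical HTTP proxy never carries this UDP traffic at all, which is exactly why a browser's STUN lookup can reveal the address the proxy was supposed to hide.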
Related
I am currently writing an app that is intended to control a machine. The machine is controlled by a Raspberry Pi, which exposes an API (via flask) on the local wifi. The app, in turn, is connected to the same wifi and accesses that API.
To make sure that not everybody who downloads the app and is connected to the wifi can control the machine, I set up some basic authentication.
My next step was to switch to HTTPS with a self-signed certificate. But the machine (i.e. the Raspberry Pi) and the app need to be on the same wifi to communicate, so there are actually no intermediaries in the communication. This makes me wonder whether there is any real possibility of a man-in-the-middle attack and whether I actually need HTTPS.
So my question is: do I need HTTPS here?
A subjective answer. First, you have to decide what the risk to your machine is if someone or something gets control of it. For most consumer applications within the household, that risk may be low (or maybe not - what about an irrigation controller or a heater?). Then ask why, and with what probability, someone would WANT to hack in (if your machine is a best seller across the globe, it might be a fun target).
You might be surprised at how many devices are on a normal household's wifi - dozens at least. Furthermore, while most consumer devices don't rely on inbound access (most bounce control/commands through a website), there are probably far more inbound (from the internet) ports opened through firewalls than you imagine.
So - I do think there are many opportunities for a MITM attack on a normal household wifi. Whether that is a concern in early product development is up to you.
This SO answer: Is it possible to prevent man-in-the-middle attack when using self-signed certificates? might be useful when actually implementing.
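When you do get around to implementing it, here is a minimal sketch of serving a flask API over HTTPS with a self-signed certificate, combined with basic auth as in the question. The route name, credentials, and certificate file names are placeholders, not taken from the original setup:

    # Minimal sketch: flask API over HTTPS with a self-signed certificate.
    # Assumes cert.pem/key.pem were generated beforehand, e.g. with:
    #   openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
    from flask import Flask, Response, request

    app = Flask(__name__)

    def authorized(auth):
        # Placeholder credential check; don't hard-code secrets in real code.
        return auth is not None and auth.username == "pi" and auth.password == "secret"

    @app.route("/machine/start")
    def start_machine():
        if not authorized(request.authorization):
            return Response("Unauthorized", 401,
                            {"WWW-Authenticate": 'Basic realm="machine"'})
        return {"status": "started"}

    if __name__ == "__main__":
        # ssl_context takes (cert, key) paths; "adhoc" also works for a
        # throwaway certificate if pyOpenSSL is installed.
        app.run(host="0.0.0.0", port=5000, ssl_context=("cert.pem", "key.pem"))

Note that with a self-signed certificate the app has to pin or explicitly trust that certificate, which is what the linked answer discusses.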
I'm a beginner with Squid. I want a way to remain anonymous on the net, and I also want to be able to access content on the internet that is filtered. My Windows computer is behind a firewall (filtered); my server (CentOS 5) is not. For example, when I enter http://facebook.com in the browser URL bar, it redirects to an intranet IP which tells me to avoid going to this site!
Now I've installed Squid on the server and traffic is propagated through this server, but this redirection still occurs, so I still can't open filtered sites.
What can I do? A friend of mine said the only way is to use HTTPS, i.e. the connection between the browser (Firefox) and the server must use that protocol. Is that right? And how can I do that?
What's your suggestion? I don't necessarily want to use Squid. Besides, the HTTPS protocol sometimes gets banned or throttled in my country, so I'd prefer the protocol to remain HTTP. I also thought about writing code on the client and server to transform, compress/decompress, and packetize the traffic as disguised binary HTTP packets to get through with as much speed and success as possible, but I'm not an expert in this area and for now I prefer more straightforward ways.
I appreciate any help/info.
I assume you are located in Iran. I would suggest using Tor if you mainly access websites. The latest release works reasonably well in Iran. It also includes an option to obfuscate traffic so that it is not easily detectable that you are using Tor.
See also this question: https://tor.stackexchange.com/questions/1639/using-tor-in-iran-for-the-first-time-user-guide
An easy way to get the Tor package is to use the autoresponder: https://www.torproject.org/projects/gettor.html
In case the website is blocked, it works as follows:
Users can communicate with the GetTor robot by sending messages via email. Currently, the best-known GetTor email address is gettor@torproject.org. This should be the most current stable GetTor robot, as it is operated by the Tor Project.
To ask for Tor Browser, a user should send an email to the GetTor robot with one of the following options in the message body:
windows: if the user needs Tor Browser for Windows.
linux: if the user needs Tor Browser for Linux.
osx: if the user needs Tor Browser for Mac OSX.
During frontend development, we work in an optimized network environment, so network lag isn't an issue for developers.
But once deployed, the site receives feedback from users who are suffering from network lag. They might not receive an AJAX response in time, they might be blocked by one large script/image loading, or they might not load the JS required for the site to function.
I want to test our site under bad network conditions, so the question is: how can we imitate bad networking in our development environment?
Use a BSD machine with dummynet. dummynet is awesome and exactly what you're looking for. You'll need a machine with two NICs, running a BSD variant (FreeBSD, OS X), to "route" your traffic through.
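For reference, a basic dummynet setup comes down to two ipfw commands. Here is a sketch that drives them from Python; it assumes a FreeBSD box with dummynet enabled and root privileges, and the delay/bandwidth/loss figures are arbitrary examples to adjust:

    # Sketch: shape all traffic through a dummynet pipe via ipfw.
    # Assumes FreeBSD with dummynet enabled and root privileges.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Send all IP traffic through pipe 1...
    run(["ipfw", "add", "pipe", "1", "ip", "from", "any", "to", "any"])
    # ...then shape that pipe: added delay, bandwidth cap, packet loss rate.
    run(["ipfw", "pipe", "1", "config",
         "delay", "250ms", "bw", "1Mbit/s", "plr", "0.05"])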
I’m using the Net panel in Firebug to evaluate the performance of web pages I’m writing.
Specifically, I’m wondering what the precise meaning is of the stages for each resource that’s downloaded (i.e. DNS lookup, Connecting, Blocking, Sending, Waiting, Receiving).
But more generally, is there a Firebug guide where I can look this stuff up?
The various stages correspond to the various states of the connection being made for the resource. I don't know of any documents on them, and a quick look around the Firebug network page doesn't show any explanations. There is some documentation in the resources area (wiki) of the Firebug site, though it looks like it's subtly different from what is actually presented in the interface. They seem reasonably obvious to me, but I suppose I could be wrong, too.
DNS lookup - the name of the remote server is being resolved to an IP address
Connecting - a TCP/IP connection is being opened to the remote server
Blocking - the client is waiting for another request to complete (or a thread to become available) before sending the request
Sending - the client is sending data to the remote server
Waiting - the client is waiting on a response from the remote server
Receiving - the client is reading data from the remote server
You can read up on HTTP headers.
And for the whole Firebug Net panel you can read this.
Although it doesn’t include an answer to this question, Amy Hoy and Thomas Fuchs’s PDF ebook JavaScript Performance Rocks! has a lot of good information about measuring web page performance using Firebug.
The Performance Golden Rule from Yahoo's performance best practices is:
80-90% of the end-user response time is spent downloading all the components in the page: images, stylesheets, scripts, Flash, etc.
This means that when I'm developing on my local webserver it's hard to get an accurate idea of what the end user will experience.
How can I simulate latency so that I can understand how my application will perform when I've deployed it on the web?
I develop primarily on Windows, but I would be interested in solutions for other platforms as well.
A laser modem pointed at the mirrors on the moon should give latency that's out of this world.
Fiddler2 can do this very easily. Plus, it does so much more that is useful when doing development.
YSlow might help you out. It analyzes web pages based on Yahoo!'s performance rules.
Firefox Throttle can throttle connection speed (Windows only).
Both are plugins for Firefox.
You can set up a proxy outside your network that tunnels traffic from your web server to it and then back to your local browser. The latency would be quite realistic (though of course it depends on where you put the proxy).
Otherwise you can find many ways to implement it in software.
Run the web server on a nearby Linux box and configure NetEm to add latency to packets leaving the appropriate interface.
If your web server cannot run under Linux, configure the Linux box as a router between your test client machine and your web server, then use NetEm anyway.
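The NetEm configuration itself is a single tc command; here is a small sketch that issues it from Python. It assumes a Linux box with iproute2 and root access, and eth0 is a placeholder for the actual egress interface:

    # Sketch: add 200 ms of delay (with 50 ms jitter) to packets leaving eth0.
    # Assumes Linux with iproute2 and root privileges; eth0 is a placeholder.
    import subprocess

    subprocess.run(
        ["tc", "qdisc", "add", "dev", "eth0", "root",
         "netem", "delay", "200ms", "50ms"],
        check=True,
    )
    # Undo it later with: tc qdisc del dev eth0 root netem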
While there are many ways to simulate latency, including some very good hardware solutions, one of the easiest for me is to run a TCP proxy in a remote location. The proxy listens and then directs the traffic back to my final destination. On a remote server, I run a Unix program called balance, and point it back at my local server.
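If you'd rather keep everything local, the same idea works with a small latency-injecting TCP proxy. The sketch below is not the balance program, just a hand-rolled stand-in; the addresses and the 300 ms delay are placeholder values:

    # Sketch: TCP proxy that injects artificial delay per relayed chunk.
    # Point your browser at 127.0.0.1:8080; UPSTREAM is your real web server.
    import socket
    import threading
    import time

    LISTEN = ("127.0.0.1", 8080)
    UPSTREAM = ("127.0.0.1", 80)   # placeholder: your real web server
    DELAY = 0.3                    # seconds of delay to inject per chunk

    def pump(src, dst):
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                time.sleep(DELAY)  # the artificial latency
                dst.sendall(data)
        except OSError:
            pass
        finally:
            src.close()
            dst.close()

    def main():
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(LISTEN)
        server.listen(5)
        while True:
            client, _ = server.accept()
            upstream = socket.create_connection(UPSTREAM)
            # One thread per direction so both halves of the conversation flow.
            threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

    if __name__ == "__main__":
        main()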
If you need to simulate it for just a single server request, a simple way is to make the server sleep() for a second before returning.
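As a concrete example of the sleep() approach, here is a flask handler with an artificial one-second pause; the route name and duration are placeholders:

    # Sketch: fake a slow server response by sleeping before returning.
    import time
    from flask import Flask

    app = Flask(__name__)

    @app.route("/slow")
    def slow():
        time.sleep(1)  # pretend the network/server is slow
        return "finally!"

    if __name__ == "__main__":
        app.run()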