We want to implement a MITM proxy. It should receive HTTPS requests from the client, decrypt them, and return pre-recorded responses. That means the proxy is never connected to the remote server directly.
I know that FiddlerCore supports MITM, but how can I possibly use it in my scenario?
Thanks
https://groups.google.com/forum/#!topic/httpfiddler/E0JZrRRGhVg
This is a pretty straightforward task. If you look at the demo project included in FiddlerCore, you can get most of the way there.
Fiddler.FiddlerApplication.BeforeRequest += delegate(Fiddler.Session oS)
{
    if (oS.HTTPMethodIs("CONNECT"))
    {
        // Tell FiddlerCore to generate a fake tunnel so it can decrypt the HTTPS traffic
        oS.oFlags["X-ReplyWithTunnel"] = "Fake for HTTPS Tunnel";
        return;
    }
    if (oS.uriContains("replaceme.txt"))
    {
        // Skip the server entirely and return the pre-recorded response
        oS.utilCreateResponseAndBypassServer();
        oS.responseBodyBytes = SessionIWantToReturn.responseBodyBytes;
        oS.oResponse.headers = (HTTPResponseHeaders)SessionIWantToReturn.oResponse.headers.Clone();
    }
};
Related
Briefly: I created a websocket service on an internet server using Cro. It was very simple to do using the Cro examples, and there is no problem sending and receiving data from an HTML page when the page is served as localhost. But when the page is served over https, the websocket cannot be established.
How is the wss protocol used with Cro?
Update: after installing Cro and running cro stub :secure, the generated service.p6 contains some more code that is not explicit in the documentation.
More detail:
I have a docker image running on the internet server; Cro is set to listen on port 35145, so the docker command is docker run --rm -t -p 35145:35145 myApp
The service file contains
use Cro::HTTP::Log::File;
use Cro::HTTP::Server;
use Cro::HTTP::Router;
use Cro::HTTP::Router::WebSocket;
my $host = %*ENV<RAKU_WEB_REPL_HOST> // '0.0.0.0';
my $port = %*ENV<RAKU_WEB_REPL_PORT> // 35145;
my Cro::Service $http = Cro::HTTP::Server.new(
    http => <1.1>,
    :$host,
    :$port,
    application => routes(),
    after => [
        Cro::HTTP::Log::File.new(logs => $*OUT, errors => $*ERR)
    ]
);
$http.start;
react {
    whenever signal(SIGINT) {
        say "Shutting down...";
        $http.stop;
        done;
    }
}
sub routes() {
    route {
        get -> 'raku' {
            web-socket :json, -> $incoming {
                supply whenever $incoming -> $message {
                    my $json = await $message.body;
                    if $json<code> {
                        my ($stdout, $stderr);
                        # process code
                        emit({ :$stdout, :$stderr })
                    }
                }
            }
        }
    }
}
In the HTML I have a textarea container with an id raku-code. The js script has the following (I set websocketHost and websocketPort elsewhere in the script) in a handler that fires after the DOM is ready:
const connect = function() {
    // Return a promise, which will wait for the socket to open
    return new Promise((resolve, reject) => {
        // This calculates the link to the websocket.
        const socketProtocol = (window.location.protocol === 'https:' ? 'wss:' : 'ws:');
        const socketUrl = `${socketProtocol}//${websocketHost}:${websocketPort}/raku`;
        socket = new WebSocket(socketUrl);
        // This will fire once the socket opens
        socket.onopen = (e) => {
            // Send a little test data, which we can use on the server if we want
            socket.send(JSON.stringify({ "loaded" : true }));
            // Resolve the promise - we are connected
            resolve();
        }
        // This will fire when the server sends the user a message
        socket.onmessage = (data) => {
            let parsedData = JSON.parse(data.data);
            const resOut = document.getElementById('raku-ws-stdout');
            const resErr = document.getElementById('raku-ws-stderr');
            resOut.textContent = parsedData.stdout;
            resErr.textContent = parsedData.stderr;
        }
        // Reject the promise if the socket errors out before connecting
        socket.onerror = (err) => reject(err);
    });
};
When an HTML file with this JS script is set up and served locally, I can send data to the Cro app running on an internet server, and the Cro app (running in a docker image) processes and returns data, which is placed in the right HTML container. Using Firefox and the developer tools, I can see that the ws connection is created.
But when I serve the same file via Apache which forces access via https, Firefox issues an error that the 'wss' connection cannot be created. In addition, if I force a 'ws' connection in the JS script, Firefox prevents the creation of a non-secure connection.
a) How do I change the Cro coding to allow for wss? From the Cro documentation it seems I need to add a Cro::TLS listener, but it isn't clear where to instantiate the listener.
b) If this is to be in a docker file, would I need to include the secret encryption keys in the image, which is not something I would like to do?
c) Is there a way to put the Cro app behind the Apache server so that the websocket is decrypted/encrypted by Apache?
How do I change the Cro coding to allow for wss? From the Cro documentation it seems I need to add a Cro::TLS listener, but it isn't clear where to instantiate the listener.
Just pass the needed arguments to Cro::HTTP::Server, it will set up the listener for you.
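For illustration, the service generated by cro stub :secure enables TLS by handing key and certificate file paths to the constructor. A minimal sketch, assuming files at the paths shown (the paths and hash name are illustrative, not required names):

```raku
my %tls =
    private-key-file => 'resources/fake-tls/server-key.pem',
    certificate-file => 'resources/fake-tls/server-crt.pem';

my Cro::Service $http = Cro::HTTP::Server.new(
    http => <1.1>,
    :$host, :$port,
    :%tls,                     # passing tls arguments makes the listener serve HTTPS/WSS
    application => routes()
);
```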
If this is to be in a docker file, would I need to include the secret encryption keys in the image, which is not something I would like to do?
No. You can keep them in a volume, or bind-mount them from the host machine.
Is there a way to put the Cro app behind the Apache server so that the websocket is decrypted/encrypted by Apache?
Yes, same as with any other app. Use mod_proxy, mod_proxy_wstunnel and a ProxyPass command. Other frontends such as nginx, haproxy, or envoy will also do the job.
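As a sketch, an Apache virtual host doing TLS termination for the websocket route might look like the following. The port and /raku path come from the question; the hostname and certificate paths are placeholders, so treat this as a starting point rather than a drop-in config:

```apache
# Requires: a2enmod ssl proxy proxy_http proxy_wstunnel
<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.crt
    SSLCertificateKeyFile /etc/ssl/private/example.key

    # Forward the websocket route to the plain-HTTP Cro app
    ProxyPass        /raku ws://localhost:35145/raku
    ProxyPassReverse /raku ws://localhost:35145/raku
</VirtualHost>
```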
Though this is not a pure Cro solution, you can run your Cro app on a plain (non-SSL) HTTP/websocket port bound to localhost, and then have an Nginx server (configured to serve https/ssl traffic) handle the incoming public https/ssl requests and pass them on to your app as plain http traffic using Nginx's reverse proxy mechanism (this is often referred to as SSL termination). That way you remove the need to handle https/ssl on the Cro side.
The only hurdle might be whether the websocket protocol is handled well by the Nginx proxy. I've never tried that, but you should be fine according to the Nginx docs - https://www.nginx.com/blog/websocket-nginx/
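For reference, the nginx side of such an SSL-terminating setup is only a few lines. The essential part for websockets is forcing HTTP/1.1 and forwarding the Upgrade/Connection headers; the hostname, certificate paths, and /raku location below are placeholders matching the question:

```nginx
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;

    location /raku {
        proxy_pass http://localhost:35145;
        # Websocket upgrade requires HTTP/1.1 and these two headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```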
Similar to how one could use a Fiddler script to redirect outgoing requests to a different URL (example below), I would like to redirect outgoing requests to certain URLs to localhost or another URL.
var list = ["https://onetoredirect.com", "https://twotoredirect.com"];

static function OnBeforeRequest(oS: Session) {
    if (oS.uriContains("http://URLIWantToFullyBlock.com/")) {
        oS.oRequest.FailSession(404, "Blocked", "");
    }
    for (var i = 0; i < list.length; i++) {
        if (oS.uriContains(list[i])) {
            oS.fullUrl = oS.fullUrl.Replace("http://", "https://");
            oS.host = "localhost"; // This can also be replaced with another IP address.
            break;
        }
    }
}
Problem is that I need to do this for a program whose source I do not have access to, so I cannot just edit the program to send to these new URLs. The two vague ideas I had were:
A script/program that runs system-wide and redirects the requests
A script/program that watches my specific process (I have the ability to launch the process programmatically if need be) for these requests and redirects them
Either is viable; obviously I would prefer whichever is easier or more versatile.
I want to write this as part of a launcher for a game, where you can either use my launcher which would launch the game with the redirection on, or launch the game normally and have the redirection off (to play normally), essentially removing any need for user modification. It is also okay for the solution to be Windows only since the game is Windows only at the moment!
I ended up setting up a proxy with mitmproxy with a custom script, then setting the Windows proxy settings to go through localhost automatically!
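For anyone attempting the same thing: the heart of such a mitmproxy script is a request hook that rewrites the target host. The sketch below shows just the redirect decision as a plain function so it can stand alone; the host list and target address are made-up examples, and in a real addon you would call this from a def request(flow): hook and assign the result to flow.request.host / flow.request.port.

```python
from urllib.parse import urlparse

# Hosts to intercept and where to send them (illustrative values)
REDIRECT_HOSTS = {"onetoredirect.com", "twotoredirect.com"}
TARGET_HOST = "127.0.0.1"
TARGET_PORT = 8080

def redirect_target(url):
    """Return (host, port) to redirect to, or None to leave the request alone."""
    parsed = urlparse(url)
    if parsed.hostname in REDIRECT_HOSTS:
        return (TARGET_HOST, TARGET_PORT)
    return None

# In a mitmproxy addon, a `def request(flow):` hook would call this and,
# on a non-None result, set flow.request.host and flow.request.port.
print(redirect_target("https://onetoredirect.com/api"))  # → ('127.0.0.1', 8080)
```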
I'm investigating the idea of using Blazor WASM to build a retail application that would run on an office Intranet. The application would be installed on a given machine, to be accessed via browser from any of several machines on the LAN.
The biggest stumbling block I'm running into is the question of how to go about securing the channel.
The app itself would run as a Windows Service, listening on port 443 on one of the workstations, e.g. https://reception/. But how do we tell Blazor to use a self-signed TLS cert for that hostname?
If there's a better way to go about this, I'm all ears. I can't use Let's Encrypt certs, because neither the application nor its hostname will be exposed to the public Internet.
There is a glut of information on working with Blazor to build such an app, but most if not all demos run on localhost. That works fine for dev, but not for production (in a self-hosting scenario, anyway). There doesn't seem to be much discussion at all of this aspect of things.
How can we use a custom certificate for browser requests from the client to a Blazor WASM app?
Any ideas?
I was able to get this working using some slightly modified sample code from the official documentation:
builder.WebHost.ConfigureKestrel(serverOptions =>
{
    serverOptions.ListenAnyIP(443, listenOptions =>
    {
        listenOptions.UseHttps(httpsOptions =>
        {
            // Load the self-signed cert named "test" from the CurrentUser/My store
            var testCert = CertificateLoader.LoadFromStoreCert(
                "test", "My", StoreLocation.CurrentUser,
                allowInvalid: true);
            var certs = new Dictionary<string, X509Certificate2>(
                StringComparer.OrdinalIgnoreCase)
            {
                ["test"] = testCert
            };

            // Pick a certificate per SNI hostname, falling back to the test cert
            httpsOptions.ServerCertificateSelector = (connectionContext, name) =>
            {
                if (name is not null && certs.TryGetValue(name, out var cert))
                {
                    return cert;
                }
                return testCert;
            };
        });
    });
});
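If you don't already have a certificate in the store, one way to produce a self-signed cert for a LAN hostname like reception is openssl (the filenames, password, and hostname here are examples). On Windows you would then import the resulting PFX into the CurrentUser/My store and trust it on each client machine:

```shell
# Self-signed cert valid for the intranet hostname "reception" (example values)
openssl req -x509 -newkey rsa:2048 -nodes -days 825 \
  -keyout reception.key -out reception.crt \
  -subj "/CN=reception" \
  -addext "subjectAltName=DNS:reception"

# Bundle key + cert into a PFX that the Windows certificate store can import
openssl pkcs12 -export -out reception.pfx \
  -inkey reception.key -in reception.crt -passout pass:changeit
```

Note that -addext requires OpenSSL 1.1.1 or later; the subjectAltName matters because modern browsers ignore the CN when validating hostnames.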
The easiest way to handle SSL is to use IIS, which will act as a proxy for your Blazor app.
IIS gives you easy access to well-documented SSL settings.
https://learn.microsoft.com/en-us/aspnet/core/blazor/host-and-deploy/webassembly?view=aspnetcore-6.0#standalone-deployment
According to the WebSocketTransformer docs, it tries to upgrade HttpRequests according to the RFC6455 websocket standard:
This transformer strives to implement web sockets as specified by RFC6455.
And provides this Dart example code:
HttpServer server;
server.listen((request) {
  if (...) {
    WebSocketTransformer.upgrade(request).then((websocket) {
      ...
    });
  } else {
    // Do normal HTTP request processing.
  }
});
Now if you search through PhantomJS' issue tracker you can find this issue:
11018 - Update to final websocket standard
It basically says that the latest PhantomJS (1.9.7) uses an old websocket standard (I still haven't figured out which version sends the Sec-WebSocket-Key1 header, but I assume it's not the RFC6455 one).
So basically, my problem is that when I point the PhantomJS headless browser at my site, which uses Dart 1.3.3's websocket server implementation (basically some upgrade code as I pasted above), it says:
Headers from PhantomJS:
sec-websocket-key1: 327J w6iS/b!43 L2j5}2 2
connection: Upgrade
origin: http://mydomain.com
upgrade: WebSocket
sec-websocket-key2: 42 d 7 64 84622
host: mydomain.com
Dart:
WebSocketTransformer.isUpgradeRequest(request) = false
WebSocketException: Invalid WebSocket upgrade request
The upgrade of the request failed (I assume because of the version mismatch).
My question is: until PhantomJS gets updated in 2.0, is there a way I can fix my Dart back-end so that it handles PhantomJS websockets as well?
According to the docs of WebSocketTransformer, the upgrade function has two arguments: a mandatory HttpRequest, and a second optional argument:
static Future<WebSocket> upgrade(HttpRequest request, {Function protocolSelector(List<String> protocols)})
Could this maybe help me somehow?
The protocols won't help you. These allow the two sides to agree on a subprotocol that is used after the handshake for communication, but you can't modify the handshake and the exchanged fields themselves.
What you could do is write a completely separate websocket implementation (directly based on Dart HTTP and TCP) that matches the old implementation PhantomJS uses. But that won't work with newer clients. By that approach you might also be able to build an implementation that supports several versions (by checking the headers when you receive the handshake HTTP request and, depending on the handshake, forwarding to the appropriate implementation).
You would have to do at least your own WebSocketTransformer implementation. For this you could start by copying Dart's interface and implementation and modifying them wherever you need (check the licenses). If the actual WebSocket behavior after the handshake is compatible between the two RFCs, you could reuse Dart's WebSocket class. If this is not the case (other framing, etc.), then you would also have to write your own WebSocket class.
Some pseudo code based on yours:
HttpServer server;
server.listen((request) {
  if (...) { // websocket condition
    if (request.headers.value("Sec-WebSocket-Key1") != null) {
      YourWebSocketTransformer.upgrade(request).then((websocket) {
        ... // websocket might need to be a different type than Dart's WebSocket
      });
    } else {
      WebSocketTransformer.upgrade(request).then((websocket) {
        ...
      });
    }
  } else {
    // Do normal HTTP request processing.
  }
});
I don't know your application, but it's probably not worth the effort. Bringing the old websocket implementation into Dart is probably the same effort as bringing the official implementation to PhantomJS. Therefore I think fixing PhantomJS should be preferred.
"No."
HttpRequest.headers is immutable, so you can't massage the request headers into a format that Dart is willing to accept. You can't do any Ruby-style monkey-patching, because Dart does not allow dynamic evaluation.
You can, should you choose a path of insanity, implement a compatible version of WebSockets by handling the raw HttpRequest yourself when you see a request coming in with the expected headers. I believe you can re-implement the WebSocket class if necessary. The source for the WebSocket is here.
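For context on what re-implementing that handshake involves: Sec-WebSocket-Key1/Key2 come from the old Hixie draft-76 handshake, where each key encodes a number as digits interleaved with spaces, and the server answers with an MD5 challenge response. A sketch of that computation, in Python purely to illustrate the scheme (a Dart version would follow the same steps):

```python
import hashlib
import struct

def decode_key(key):
    """Draft-76: concatenate the digits in the key, divide by the number of spaces."""
    digits = int("".join(ch for ch in key if ch.isdigit()))
    spaces = key.count(" ")
    return digits // spaces

def challenge_response(key1, key2, body8):
    """MD5 over both decoded keys (as big-endian uint32) plus the 8-byte request body."""
    return hashlib.md5(
        struct.pack(">II", decode_key(key1), decode_key(key2)) + body8
    ).digest()

# The Sec-WebSocket-Key1 header from the question: digits 3276432522, 3 spaces
print(decode_key("327J w6iS/b!43 L2j5}2 2"))  # → 1092144174
```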
Maybe it's possible to do that through inheritance; Dart has no way to prevent a method from being overridden.
If you have the time and you really need this, you can re-implement some methods to patch the websocket for PhantomJS:
class MyWebSocket extends WebSocket {
  MyWebSocket(/* ... */) : super(/* ... */);

  methodYouNeedToOverride(/* ... */) {
    super.methodYouNeedToOverride(/* ... */);
    // Your patch
  }
}
This way you can access "protected" variables or methods, which may be useful for patching.
But be careful: WebSocket is just the visible part; all the implementation is in websocket_impl.dart.
I am writing a BDD test for a component that will start up PhantomJS and hit a specific route on my site and do processing on that. Because the component is fundamentally about automating a PhantomJS instance, there is no way to easily stub out the HTTP requests.
So I want to stand up a self-hosted endpoint that will serve the stub data I'm after. Because this is a unit test, I think it's really important for it to run in isolation, so I do something like this:
async Task can_render_html_for_slide_async()
{
    var config = new HttpSelfHostConfiguration("http://localhost:54331");
    config.Routes.MapHttpRoute("Controller", "{controller}", new {});

    using (var server = new HttpSelfHostServer(config))
    {
        server.OpenAsync().Wait();
        var client = new HttpClient();
        var resp = await client.GetStringAsync(config.BaseAddress + "/Stub");
        Console.WriteLine(resp);
    }
}

public class StubController : ApiController
{
    public string Get()
    {
        return "Booyah";
    }
}
Which gets me
AddressAccessDeniedException : HTTP could not register URL http://+:54331/
I understand that netsh or Admin mode is required for this, but I don't understand why. Node.js, for example, runs perfectly fine on Windows but has no such requirement.
Also, using OWIN directly needs no netsh-ing. So... what's going on?
I wrote an article about it on CodeProject; it was done to make it possible for multiple applications to share the same port.
You can have both IIS and Apache (or OWIN in your case) listening on port 80. The routing to the right application is done based on the path of the URL.
IIS and Apache would both use this driver (http.sys). But you need permission to "reserve" a path.
Administrators are always authorized. For other users, use netsh or my GUI tool HttpSysManager to set the ACL.
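For completeness, the reservation for the URL in the question would look like this from an elevated prompt (substitute your own account name):

```
netsh http add urlacl url=http://+:54331/ user=MYDOMAIN\myuser
```

After that, the non-admin account can open the HttpSelfHostServer on port 54331 without elevation.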
Any method that requires granting permission via netsh uses the Windows kernel driver (http.sys) to provide HTTP access.
If a library opens a socket itself and handles the HTTP communication directly, no netsh-ing is needed.
So to answer your question: some methods use the kernel driver, and some handle the protocol themselves.