NodeMCU captive portal webserver responds to HTTP, but not HTTPS

I am setting up a captive portal based on this. My aim is to have anyone who connects be redirected and served the index.html page stored in the ESP8266's filesystem, from which they can navigate to other pages stored the same way. The code distinguishes between foreign sites and local sites by looking up the URL in a text file named "urls.txt". Everything works fine, provided the user attempts to visit a plain HTTP site, but the user is not redirected when attempting to visit an HTTPS site. For example, attempting to connect to "www.google.com" would fail, but "www.nerfhaven.com" would succeed.
Here's some code from server.lua:
srv = net.createServer(net.TCP)
srv:listen(80, function(conn)
    local rnrn = 0
    local Status = 0
    local DataToGet = 0
    local method = ""
    local url = ""
    local vars = ""
    conn:on("receive", function(conn, payload)
        if Status == 0 then
            _, _, method, url, vars = string.find(payload, "([A-Z]+) /([^?]*)%??(.*) HTTP")
            -- print(method, url, vars)
        end
        [...]
        conn:send("HTTP/1.1 200 OK\r\n\r\n")
        [...]
        local foundmatch = 0
        file.open("urls.txt", "r")
        print("potato")
        for i = 108, 1, -1 do
            line = file.readline()
            -- print(line)
            if string.match(line, url) then
                foundmatch = 1
                print("found " .. url)
            end
        end
        print("potato2")
        file.close()
        [...]
    conn:on("sent", function(conn)
        print("sending data")
        if DataToGet >= 0 and method == "GET" then
            if file.open(url, "r") then
                file.seek("set", DataToGet)
                local line = file.read(512)
                file.close()
                if line then
                    conn:send(line)
                    -- print("sending:" .. DataToGet)
                    DataToGet = DataToGet + 512
                    if (string.len(line) == 512) then
                        return
                    end
                end
            end
        end
        conn:close()
    end)
end)
I would think this should work, as I see no way for the code to discriminate between HTTP and HTTPS sites; either way the request should simply be parsed and answered with a local version (either index.html or something in urls.txt). Instead, the server seems to send no response at all.

The code you shared only listens on port 80 - the HTTP port. It wouldn't be able to respond to HTTPS requests because HTTPS uses port 443.
So first, you'll need to listen on port 443 in addition to port 80.
Once you get a connection open on port 443, you'll need to run TLS (Transport Layer Security, the 'S' in 'HTTPS') and negotiate a secure connection before you can start handling HTTP over it.
NodeMCU does have a TLS library but it appears to only operate as a client, not a server, so unless you can find someone else who's done this you're on your own here, and it's a big project.
Assuming you get that working, any browser that connects to your "captive portal" is going to throw SSL certificate errors left and right because your server is doing exactly what TLS is designed to prevent - impersonating another web site. You won't have the certificates to prove you're www.google.com so the browser will strongly advise the user that something bad is happening and they shouldn't proceed.
Fundamentally and first, though, the reason you're not getting any answer for HTTPS is that you're not listening on the HTTPS port.
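For illustration only, here is a minimal sketch of what adding a second listener on port 443 could look like (an assumed extension of server.lua, not working HTTPS: it only accepts the raw TCP connection and then closes it, because NodeMCU cannot complete the TLS handshake as a server, so browsers will still refuse the page):
-- Sketch only: accept connections on the HTTPS port so clients at least reach
-- the device. No TLS handshake is performed, so the payload seen here is an
-- unreadable ClientHello and the browser will abort the connection.
srvs = net.createServer(net.TCP)
srvs:listen(443, function(conn)
    conn:on("receive", function(sck, payload)
        sck:close()
    end)
end)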

Related

Azure WAF Rewrite rules for updating port numbers

I have a server in Azure running two web apps, one on port 443 (IIS), another on 1024 (Apache). Both are HTTPS. I have an Azure Application Gateway (WAF v2) in place. I would like to allow requests for subdomain1.domain.com to go through on 443 (which is set up and working) and requests for subdomain2.domain.com to be rewritten to port 1024 internally.
I have tried various combinations of conditions and actions, but cannot get anything to happen at all, good, bad or indifferent!
My current Condition is as follows
Type of variable to check: HTTP Header
Header type: Response Header
Header name: Common Header
Common header: Location
Case-sensitive: No
Operator: =
Pattern to match: (https?):\/\/.*subdomain2.domain.com(.*)$
My current action is:
Re-write type: Response Header
Action type: Set
Header name: Common header
Common header: Location
Header value: https://backendservername.domain.com:1024{http_resp_Location_2}
I can't find a combination that does anything at all, nor any examples that show port updates. I've tried using request headers and the host value, but unfortunately that conflicts with the host rewrite in the HTTP Settings that was necessary to get any end to end SSL working.
Thanks in advance.
Matt.

localhost refuses to connect - ERR_CONNECTION_REFUSED

I have a simple MVC web application where javascript code sends ajax requests to the controller and the controller sends back responses.
I built the app 2 years ago and everything used to work fine. Now I tried to run the app again locally and met with the following problem:
whenever an Ajax request is sent from the frontend to the controller (running on localhost), localhost refuses to connect and I get an ERR_CONNECTION_REFUSED message in (Chrome's) JavaScript console. (In Safari's JavaScript console I get the following error message: "Failed to load resource: Could not connect to the server.")
I'm running the app using NetBeans 11.2. My NetBeans IDE uses GlassFish as server:
I removed the GlassFish server from NetBeans IDE, deleted its folder in my home directory and then added the GlassFish server again in my NetBeans IDE (which also entailed downloading the newest version of the GlassFish server).
Still, the server refuses to accept any requests from the frontend.
I also tried using Payara Server (version 5.193). That didn't make a difference either.
The frontend itself looks fine at first glance by the way. That is, going to http://localhost:8080/myapp loads the frontend of the app. However, any dynamic features of the app don't work because the server refuses to accept any Ajax requests coming from the frontend (and initiated through mouse clicks).
How can I fix this?
I think I found the reason for the problem:
In my javascript-file I have the following line of code:
var url = "http://localhost:8080/myapp/Controller";
The variable "url" is passed to all the AJAX requests sent to localhost.
But here is the crazy thing: the AJAX requests are not sent to "http://localhost:8080/myapp/Controller" but to "http://localhost:8081/myapp/Controller" !!!!!
What the hell is going on here?!
Did you use port 8081 before and then change the variable "url" to the new port 8080? In that case, maybe the old value is still cached. Restart your computer and see whether that fixes the problem.
If the address attribute of the HTTP listener is set to localhost, it will refuse external connections.
You can verify its value using the command:
asadmin> get server.http-service.http-listener.http-listener-1.*
Information similar to the following should be returned:
server.http-service.http-listener.http-listener-1.acceptor-threads = 1
server.http-service.http-listener.http-listener-1.address = 0.0.0.0
server.http-service.http-listener.http-listener-1.blocking-enabled = false
server.http-service.http-listener.http-listener-1.default-virtual-server = server
server.http-service.http-listener.http-listener-1.enabled = true
server.http-service.http-listener.http-listener-1.external-port =
server.http-service.http-listener.http-listener-1.family = inet
server.http-service.http-listener.http-listener-1.id = http-listener-1
server.http-service.http-listener.http-listener-1.port = 8080
server.http-service.http-listener.http-listener-1.redirect-port =
server.http-service.http-listener.http-listener-1.security-enabled = false
server.http-service.http-listener.http-listener-1.server-name =
server.http-service.http-listener.http-listener-1.xpowered-by = true
Modify an attribute by using the set subcommand.
This example sets the address attribute of http-listener-1 to 0.0.0.0:
asadmin> set server.http-service.http-listener.http-listener-1.address=0.0.0.0
Reference:
https://docs.oracle.com/cd/E19798-01/821-1751/ablaq/index.html

Nginx will not stop rewriting

I am attempting to configure an ownCloud server that rewrites all incoming requests and ships them back out at the exact same domain and request URI, but changes the scheme from http to https.
This failed miserably. I tried:
redirect 301 https://$hostname/$request_uri
and
rewrite ^ https://$hostname/$request_uri
Anyway, after removing that just to make sure the basic nginx configuration would work as it had prior to adding the SSL redirects/rewrites, it will NOT stop changing the scheme to https.
Is there a cached list somewhere in nginx's configuration that keeps hold of redirect/rewrite protocols? I cleared my browser cache completely and it will not stop.
AH HA!
in config/config.php there was a line
'forcessl' => true,
Stupid line got switched on when it received a request on port 443.
Turned it off, and standard HTTP ownCloud works; neither Apache nor nginx is redirecting to SSL.
Phew.
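For reference, the http-to-https redirect the question originally set out to build is usually done in nginx with a plain return in the port-80 server block. A minimal sketch (the server_name is a placeholder for your own host):
server {
    listen 80;
    server_name cloud.example.com;
    # Redirect every request to the same host and URI over https.
    # $request_uri already starts with "/", so no extra slash is needed.
    return 301 https://$host$request_uri;
}
Using $host rather than $hostname keeps the host the client actually requested, and return avoids the regex matching that rewrite performs on every request.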

Kohana sessions with HTTPS

I have a main controller that checks whether session security is set, and if not it redirects to the secure controller.
The problem is that the secure controller runs over HTTPS; it checks the password, sets the session and redirects back to the main controller. I can't access the session set through HTTPS from HTTP.
How do I use HTTPS and then redirect back to plain HTTP? I need the session in both HTTP and HTTPS.
Any ideas?
EDIT
OK, I checked around and it is not really possible to do this while keeping things secure.
One option is to have the session ID sent over GET, but that's obviously insecure. So what if, after checking the login, I redirect them to an HTTPS form that posts the session to a normal HTTP page, where I check the headers and make sure the post came from my HTTPS page?
Does that sound secure to you?
Make your entire app HTTPS. Below is the block I use for almost all my sites, redirecting everything on 80 to 443.
##
# Sample config for a site needing SSL
#
$HTTP["host"] =~ "^ssl_example.localhost(:[0-9]+)?$" {
    # Serve everything over HTTPS
    $SERVER["socket"] == ":80" {
        url.redirect-code = 301
        url.redirect = ( "/(.*)" => "https://ssl_example.localhost:%1/$1" )
    }
    # This would be a great place to test SSL config for QA/Dev zones...
    $SERVER["socket"] == ":443" {
        ssl.engine = "enable"
        ssl.pemfile = rootdir + "/etc/ssl/example.pem"
        server.document-root = "/home/crosslight/ssl_example/default"
        accesslog.filename = rootdir + "/var/log/ssl_example.log"
        server.errorlog = rootdir + "/var/log/ssl_example.error.log"
        server.tag = "Crosslight (lighttpd-1.4.28 + php-5.3.3) / SSL"
        # CodeIgniter/Kohana rewrite rules
        url.rewrite-once = (
            "^/index\.php" => "/index.php", # truncate index.php?params to just index.php
            #### Serve static content
            "^/(static.*)" => "/$1",
            "^/(blog.*)" => "/$1",
            "^/(.*)$" => "/index.php/$1"
        )
    }
}
Why not make your whole page part of the secure site (https)? I don't see a way to do the http => https conversion without passing the GET key between sites.
I don't think it's such a huge security concern if your logic is to take the session id (only the key) from a cookie on the http side of the house and make it a cookie (manually) on the https side of the house. You can then perform your normal session-handling validation to see if it is a valid session. That isolates your logic from abuse as much as possible.
In the backend, you can share the session information between the sites or you can trigger a copy from the http to https based on the session id retrieved. Depends on your architecture.

How can I use local resources on a server?

How can I use local resources like CSS, JS, PNG, etc. within a dynamically rendered page using WEBrick? In other words, how is this kind of linking made to work in things like Ruby on Rails? I suppose this is one of the most basic things, and there should be a simple way to do it.
Possible Solution
I managed to do what I wanted using two servlets as follows:
require 'webrick'

class WEBrick::HTTPServlet::AbstractServlet
  def do_GET request, response
    response.body = '<html>
      <head><base href="http://localhost:2000"/></head>
      <body><img src="path/image.png" /></body>
    </html>'
  end
end

s1 = WEBrick::HTTPServer.new(Port: 2000, BindAddress: "localhost")
s2 = WEBrick::HTTPServer.new(Port: 3000, BindAddress: "localhost")
%w[INT TERM].each { |signal| trap(signal) { s1.stop } }
%w[INT TERM].each { |signal| trap(signal) { s2.shutdown } }
s1.mount("/", WEBrick::HTTPServlet::FileHandler, '/')
s2.mount("/", WEBrick::HTTPServlet::AbstractServlet)
Thread.new { s1.start }
s2.start
Is this the right way to do it? I do not feel so. In addition, I am not completely satisfied with it. For one thing, I do not like the fact that I have to specify http://localhost:2000 in the body. Another is that the use of a thread does not seem right. Is there a better way to do this? If you think this is the right way, please say so.
Generally speaking, because of security concerns, browsers likely won't link to local files (using the file:// scheme) from an internet site (using the http:// or https:// scheme). See Can Google Chrome open local links?. This is unrelated to any server-side technology.
Outside of that, it seems your server is working perfectly. You've made it so it responds to all requests with an HTML page containing a link to /. When you click on that link, something does indeed happen; a request is sent and you are served the same page again.
It kind of sounds like you want to expose your entire filesystem via HTTP. If that is what you're trying to accomplish, you can simply get away with not mounting a servlet:
server = WEBrick::HTTPServer.new(Port: 3000, BindAddress: "localhost", DocumentRoot: "/")
%w[INT TERM].each{|signal| trap(signal){server.shutdown}}
server.start
Try code like this:
require 'webrick'

class WEBrick::HTTPServlet::AbstractServlet
  def do_GET request, response
    if request.unparsed_uri == "/"
      response.body = '<html><body>test</body></html>'
    end
  end
end

server = WEBrick::HTTPServer.new(Port: 3000, BindAddress: "localhost", DocumentRoot: "/")
%w[INT TERM].each { |signal| trap(signal) { server.shutdown } }
server.mount("/", WEBrick::HTTPServlet::AbstractServlet)
server.start
This works for me; I'm not sure why, but it seems to work whenever I call at least one method on the request object.
It sounds like you are confusing web pages that are served vs. pages that are opened by the browser directly from your drive, and how file: differs from http:, https:, and ftp:.
file: is a locally available resource when a page is opened directly from the drive. The others are remotely available resources when a page is served from an httpd host.
The browser can't tell that a page from a server came from your drive; it only knows it got it from a server, somewhere, and doesn't know or care whether that server is on the same hardware. Browsers will not allow access to local resources from remotely retrieved pages. That was an exploit that was closed years ago.
See RFC 1738, section 3.10 FILES, for the official statements on file: URLs.
I finally found out that I can mount multiple servlets on a single server. It took a long time until I found such an example.
require 'webrick'

class WEBrick::HTTPServlet::AbstractServlet
  def do_GET request, response
    response.body = '<html>
      <head><base href="/resource/"/></head>
      <body>
        <img src="path_to_image/image.png" alt="picture"/>
        <a href="path_to_directory/">link</a>
        ...
      </body>
    </html>'
  end
end

server = WEBrick::HTTPServer.new(Port: 3000, BindAddress: "localhost")
%w[INT TERM].each { |signal| trap(signal) { server.shutdown } }
server.mount("/resource/", WEBrick::HTTPServlet::FileHandler, '/')
server.mount("/", WEBrick::HTTPServlet::AbstractServlet)
server.start
The path /resource/ can be anything else. The link now correctly resolves to the expected directory and shows that there is no access permission, which indicates that things are working; it's now just a matter of permissions.
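If exposing the whole filesystem at /resource/ is more than you actually want, the same FileHandler mount can point at a dedicated assets directory instead; a small variation on the code above (the directory path here is only a placeholder):
# Serve a specific assets directory rather than the filesystem root.
server.mount("/resource/", WEBrick::HTTPServlet::FileHandler, '/path/to/assets')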
