Kohana sessions with HTTPS

I have a main controller that checks whether the session's security flag is set; if not, it redirects to a secure controller. The secure controller runs over HTTPS: it checks the password, sets the session, and redirects back to the main controller. The problem is that I can't access a session set over HTTPS from plain HTTP.
How do I use HTTPS and then redirect back to plain HTTP? I need the session available in both HTTP and HTTPS.
Any ideas?
EDIT
OK, I checked around and it isn't really possible while keeping things secure.
One option is to pass the session id over GET, but that's obviously insecure. So what if, after checking the login, I redirect users to an HTTPS form that POSTs the session to a plain HTTP page, and on that page I check the headers to make sure the request came from my HTTPS page?
Does that sound secure to you?

Make your entire app HTTPS. Below is the block I use for almost all my sites, redirecting everything on port 80 to 443.
##
# Sample config for a site needing SSL
#
$HTTP["host"] =~ "^ssl_example.localhost(:[0-9]+)?$" {
    # Serve everything over HTTPS
    $SERVER["socket"] == ":80" {
        url.redirect-code = 301
        url.redirect = ( "/(.*)" => "https://ssl_example.localhost:%1/$1" )
    }
    # This would be a great place to test SSL config for QA/Dev zones...
    $SERVER["socket"] == ":443" {
        ssl.engine = "enable"
        ssl.pemfile = rootdir + "/etc/ssl/example.pem"
        server.document-root = "/home/crosslight/ssl_example/default"
        accesslog.filename = rootdir + "/var/log/ssl_example.log"
        server.errorlog = rootdir + "/var/log/ssl_example.error.log"
        server.tag = "Crosslight (lighttpd-1.4.28 + php-5.3.3) / SSL"
        # CodeIgniter/Kohana rewrite rules
        url.rewrite-once = (
            "^/index\.php" => "/index.php", # truncate index.php?params to just index.php
            # Serve static content
            "^/(static.*)" => "/$1",
            "^/(blog.*)" => "/$1",
            "^/(.*)$" => "/index.php/$1"
        )
    }
}

Why not make your whole site part of the secure (HTTPS) side? I don't see a way to do the http => https handoff without passing the session key between sites via GET.
I don't think it's such a huge security concern if your logic is to take the session id (only the key) from a cookie on the HTTP side of the house and make it a cookie (manually) on the HTTPS side of the house. You can then perform your normal session-handling validation to see whether it is a valid session. This isolates your logic from abuse as much as possible.
On the backend, you can share the session information between the sites, or you can trigger a copy from HTTP to HTTPS based on the session id retrieved. It depends on your architecture.
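That cookie-copy flow can be sketched as follows (TypeScript for illustration only; `sessionStore`, `parseCookies`, and `adoptSessionId` are hypothetical names, not Kohana APIs). Only the session id crosses the HTTP/HTTPS boundary, and it is trusted only if it matches a live server-side session:

```typescript
// Hypothetical server-side session store keyed by session id.
const sessionStore = new Map<string, { userId: number }>();

// Parse a Cookie request header into a name/value map.
function parseCookies(header: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const part of header.split(";")) {
    const [k, ...rest] = part.trim().split("=");
    if (k) out[k] = rest.join("=");
  }
  return out;
}

// On the HTTPS side: read the session id set by the HTTP side and
// only trust it if it matches a live server-side session.
function adoptSessionId(cookieHeader: string): { userId: number } | null {
  const sid = parseCookies(cookieHeader)["session_id"];
  if (!sid) return null;
  return sessionStore.get(sid) ?? null;
}
```

Because the id is validated against the server-side store, a forged or expired id simply yields no session rather than granting access.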

Related

Create a multi-website proxy with `http-proxy`

I'm using node-http-proxy to run a proxy website. I would like to proxy any target website that the user chooses, similarly to what's done by https://www.proxysite.com/, https://www.croxyproxy.com/ or https://hide.me/en/proxy.
How would one achieve this with node-http-proxy?
Idea #1: use a ?target= query param.
My first naive idea was to add a query param to the proxy, so that the proxy can read it and redirect.
Code-wise, it would more or less look like this (assuming we're deploying to http://myproxy.com):
const BASE_URL = 'https://myproxy.com';

// handler is the unique handler of all routes.
async function handler(
  req: NextApiRequest,
  res: NextApiResponse
): Promise<void> {
  try {
    const url = new URL(req.url, BASE_URL); // For example: `https://myproxy.com?target=https://google.com`
    const targetURLStr = url.searchParams.get('target'); // Get `?target=` query param.
    return httpProxyMiddleware(req, res, {
      changeOrigin: true,
      target: targetURLStr,
    });
  } catch (err) {
    res.status(500).json({ error: (err as Error).message });
  }
}
Problem: if I deploy this code to myproxy.com and load https://myproxy.com?target=https://google.com, then google.com loads, but:
if I click a link to Google Images, it loads https://myproxy.com/images instead of https://myproxy.com?target=https://google.com/images (see also: "URL as query param in proxy, how to navigate?")
Idea #2: use cookies
The second idea is to read the ?target= query param as above, store its hostname in a cookie, and proxy all resources to the cookie's hostname.
So for example user wants to access https://google.com/a/b?c=d via the proxy. The flow is:
go to https://myproxy.com?target=${encodeURIComponent('https://google.com/a/b?c=d')}
proxy reads the ?target= query param, sets the hostname (https://google.com) in a cookie
proxy redirects to https://myproxy.com/a/b?c=d (307 redirect)
proxy sees a new request, and since the cookie is set, proxies the request through node-http-proxy using the cookie's target.
Code-wise, it would look like: https://gist.github.com/throwaway34241/de8a623c1925ce0acd9d75ff10746275
Problem: this works very well, but only for one proxy target at a time. If I open one browser tab with https://myproxy.com?target=https://google.com, and another tab with https://myproxy.com?target=https://facebook.com, then:
first it'll set the cookie to https://google.com, and I can navigate in the 1st tab correctly
then I go to the 2nd tab (without closing the 1st one), it'll set the cookie to https://facebook.com, and I can navigate facebook on the 2nd tab correctly
but if I then go back to the first tab, it'll proxy google resources through facebook, because the cookie has been overwritten.
I'm a bit out of ideas, and am wondering how those generic proxy websites are doing. Ideally, I would not want to parse the HTML of the target website.
The idea of a proxy is to intercept the client's requests (either by port or by backend API), extract the URLs of the requested resources, modify them, make those requests itself to the origin servers, then modify the responses and send them back to the client.
Your first approach does all of this except modifying the responses before sending them back.
One way to do this is to edit all links in the resources returned by the proxy so that they point at your own address, and only then send them as responses to the client.
Another way is to wrap the target site in a frame, as most web proxy sites do, and have a script crawl the page and replace all links.
There is a small problem, though: JavaScript-based requests are mostly hardcoded in scripts, and replacing them is not an easy job.
Your second approach sounds as if it would work better, but that's just a hunch, nothing concrete I can say. Implement a tab-activity checker so you can switch the cookie to the active tab; see the how-to-tell-if-browser-tab-is-active discussion for that.
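The link-rewriting approach might be sketched like this (`PROXY_BASE` and `rewriteLinks` are assumed names; regex rewriting is brittle and a real implementation would use an HTML parser):

```typescript
// Assumed deployment URL of the proxy, not taken from the question.
const PROXY_BASE = "https://myproxy.com";

// Rewrite links in a proxied HTML body so they route back through the
// proxy as ?target= URLs instead of escaping to the origin site.
function rewriteLinks(html: string, targetOrigin: string): string {
  return html
    // Absolute URLs: send them through the proxy directly.
    .replace(/(href|src)="(https?:\/\/[^"]+)"/g,
      (_m, attr, url) =>
        `${attr}="${PROXY_BASE}/?target=${encodeURIComponent(url)}"`)
    // Root-relative URLs: resolve against the proxied origin first.
    .replace(/(href|src)="(\/[^"]*)"/g,
      (_m, attr, path) =>
        `${attr}="${PROXY_BASE}/?target=${encodeURIComponent(targetOrigin + path)}"`);
}
```

This only covers `href`/`src` attributes in markup; URLs assembled in JavaScript at runtime (the "small problem" above) would slip through.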

NodeMCU captive portal webserver responds to HTTP, but not HTTPS

I am setting up a captive portal based on this. My aim is to have anyone who connects be redirected to and served the index.html page stored in the ESP8266's filesystem, from which they can navigate to other pages stored the same way. The code distinguishes between foreign and local sites by looking up the URL in a text file named "urls.txt". Everything works fine provided the user attempts to visit a plain-HTTP site, but the user is not redirected when attempting to visit an HTTPS site. For example, attempting to connect to "www.google.com" fails, but "www.nerfhaven.com" succeeds.
Here's some code from server.lua:
srv = net.createServer(net.TCP)
srv:listen(80, function(conn)
    local rnrn = 0
    local Status = 0
    local DataToGet = 0
    local method = ""
    local url = ""
    local vars = ""
    conn:on("receive", function(conn, payload)
        if Status == 0 then
            _, _, method, url, vars = string.find(payload, "([A-Z]+) /([^?]*)%??(.*) HTTP")
            -- print(method, url, vars)
        end
        [...]
        conn:send("HTTP/1.1 200 OK\r\n\r\n")
        [...]
        local foundmatch = 0
        file.open("urls.txt", "r")
        print("potato")
        for i = 108, 1, -1 do
            line = file.readline()
            -- print(line)
            if string.match(line, url) then
                foundmatch = 1
                print("found " .. url)
            end
        end
        print("potato2")
        file.close()
        [...]
    end)
    conn:on("sent", function(conn)
        print("sending data")
        if DataToGet >= 0 and method == "GET" then
            if file.open(url, "r") then
                file.seek("set", DataToGet)
                local line = file.read(512)
                file.close()
                if line then
                    conn:send(line)
                    -- print("sending:" .. DataToGet)
                    DataToGet = DataToGet + 512
                    if string.len(line) == 512 then
                        return
                    end
                end
            end
        end
        conn:close()
    end)
end)
I would have thought this should work, as I see no way for the code to discriminate between HTTP and HTTPS sites; either should simply be intercepted and replaced with a local version (index.html or something in urls.txt). Instead, it seems to send no response at all.
The code you shared only listens on port 80 - the HTTP port. It wouldn't be able to respond to HTTPS requests because HTTPS uses port 443.
So first, you'll need to listen on port 443 in addition to port 80.
Once you get a connection open on port 443 you'll need to run TLS (Transport Layer Security, the 'S' in 'HTTPS') and negotiate a secure connection before you can start handling HTTP over the secure connection.
NodeMCU does have a TLS library but it appears to only operate as a client, not a server, so unless you can find someone else who's done this you're on your own here, and it's a big project.
Assuming you get that working, any browser that connects to your "captive portal" is going to throw SSL certificate errors left and right because your server is doing exactly what TLS is designed to prevent - impersonating another web site. You won't have the certificates to prove you're www.google.com so the browser will strongly advise the user that something bad is happening and they shouldn't proceed.
Fundamentally and first, though, the reason you're not getting any answer for HTTPS is that you're not listening on the HTTPS port.
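As a side note on why an HTTP parser could never handle TLS traffic even if the connection did arrive: a TLS connection doesn't begin with an ASCII request line like "GET / HTTP/1.1" but with a binary ClientHello record, so a pattern like the string.find above would have nothing to match. A quick sniff, in illustrative TypeScript rather than NodeMCU Lua (`looksLikeTls` is a made-up helper):

```typescript
// Check whether the first bytes of a connection look like a TLS record
// rather than an ASCII HTTP request line.
function looksLikeTls(firstBytes: Uint8Array): boolean {
  // 0x16 = TLS handshake record type; 0x03 = major version byte
  // shared by SSL 3.0 and all TLS versions.
  return firstBytes.length >= 2 &&
    firstBytes[0] === 0x16 && firstBytes[1] === 0x03;
}
```

A plain-HTTP request starts with a printable method name ("GET", "POST", ...), so the two are trivially distinguishable from the first byte.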

Yii2 https URLs do not work

I'm trying to run a local copy of my Yii2 site over HTTPS.
I use this in the config to force HTTP URLs to HTTPS:
'on beforeRequest' => function ($event) {
    if (!Yii::$app->request->isSecureConnection) {
        $url = Yii::$app->request->getAbsoluteUrl();
        $url = str_replace('http:', 'https:', $url);
        Yii::$app->getResponse()->redirect($url);
        Yii::$app->end();
    }
},
The only URL I can reach is the home page, i.e. a bare URL such as
example.ext
Other URLs give:
Not Found: The requested URL /site/index was not found on this server.
When I remove the 'on beforeRequest' handler from the config, I can reach every HTTP URL.
Question: why do the HTTPS URLs become unreachable?
Eventually I worked out that there was no URL rewrite for pretty URLs in the virtual host listening on port 443.
Adding the recommended rewrite rule there solved the problem.
@stfsngue: Thank you for the comment.
Do you see any particular reason to prefer .htaccess over 'on beforeRequest' for forcing HTTPS?
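For reference, the rewrite rule in question is Yii2's standard pretty-URL rule; an Apache version for the :443 virtual host might look like this (a sketch, assuming Apache with mod_rewrite and the default Yii2 entry script; adapt paths to your setup):

```apache
# Inside the <VirtualHost *:443> block (or an .htaccess in the web root)
RewriteEngine on
# If the request maps to an existing file or directory, serve it directly
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# Otherwise route the request through Yii's entry script
RewriteRule . index.php
```

Without this, only the bare domain resolves, since /site/index has no corresponding file on disk, which matches the symptom described above.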

$urlRouterProvider, simple routing

Can't get my head around $urlRouterProvider...
Basically, whenever I go to a link it should load the associated view and controller. That part works:
$urlRouterProvider.when("/", "/home");
$urlRouterProvider.otherwise("/error");

$stateProvider.state('views', {
  url: "/:view",
  templateUrl: function (stateParams, formResolver) {
    return "views/" + stateParams.view + "/" + stateParams.view + "-view.html";
  },
  controllerProvider: function ($stateParams) {
    return "" + $stateParams.view + "Ctrl";
  }
});
So whenever the user goes to http://localhost:3030/#/foo, it loads "views/foo/foo-view.html" with the controller "fooCtrl"; it goes to home by default, and to error in all other cases.
That is cool. What I need, though, is for http://localhost:3030/#/auth to redirect to "/auth" on the server, skipping the state provider. Currently it sees that as a state and tries to find a corresponding view and controller.
If you need to redirect to the server, you need to leave out the #/ part of the URL.
The browser ignores the #/ portion of the URL, which is how AngularJS is able to let the page you serve from localhost:3030/#/ handle the request. This is essentially still just requesting localhost:3030/.
If you want a true redirect or navigation to /auth on your server, ignore state for that request: you want the browser to make a straight-up HTTP request pointed directly at your server. Use /auth as the action in your form, or post to /auth from within your controller. When you are done on the server, redirect the user back to your Angular application.
Remember as well that you need some mechanism for your AngularJS application to know when the user has authenticated. In our applications, we have the server set a cookie containing a JWT that the AngularJS application then uses to retrieve the user information. This way the AngularJS application can tell when the user is really logged in (vs. a user going to a URL that merely represents a logged-in state).
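The point about the browser ignoring `#/` can be demonstrated with the standard WHATWG URL class (`serverSeesPath` is a made-up helper): the fragment is client-side only and is never sent to the server.

```typescript
// Return the part of a URL that actually reaches the server:
// path plus query string. The fragment (url.hash) never leaves
// the browser, which is why #/auth cannot trigger a server route.
function serverSeesPath(fullUrl: string): string {
  const u = new URL(fullUrl);
  return u.pathname + u.search;
}
```

So http://localhost:3030/#/auth and http://localhost:3030/ are the same request from the server's point of view, while http://localhost:3030/auth is a genuinely different one.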

Https (SSL) in Codeigniter not working properly

I have a CI site with several forms using jQuery's .click function. Over HTTP everything worked well, but after changing to HTTPS none of the form buttons fire. This happens both on localhost and on the web host. Does anything need to be configured to run CI over HTTPS?
Please advise, and thanks!
Solved:
I just removed the URL from $config['base_url'] and the issue is gone, but I wonder why $config['base_url'] couldn't hold a value when running over HTTPS? I hope someone can clear up my doubt.
Thanks, guys, for taking the time to view my question.
Make your base_url protocol-independent:
$config['base_domain'] = 'yourdomain.com'; // a new config item if you need to get your domain name

if (isset($_SERVER['HTTP_HOST']))
{
    $protocol = ($_SERVER['SERVER_PORT'] == 443 ? 'https://' : 'http://');
    $config['base_url'] = $protocol.$_SERVER['HTTP_HOST'];
    $config['base_url'] .= str_replace(basename($_SERVER['SCRIPT_NAME']), "", $_SERVER['SCRIPT_NAME']);
}
else
{
    $config['base_url'] = '';
}
I am guessing your AJAX request fails because you are trying to access non-secure content from a secure site.
I had a similar issue. I simply changed the base URL from HTTP to HTTPS in the config file and it worked well for both protocols.
# Base URL in codeigniter with HTTP
$config['base_url'] = 'http://mysite.abc/';
# Base URL in codeigniter with HTTPS
$config['base_url'] = 'https://mysite.abc/';
Remember, when you change HTTP to HTTPS, the site should work well for both protocols but it doesn't work the other way around.
