Chrome redirects .dev to https - laravel

Suddenly, Google Chrome started redirecting my virtual-host domain myapplication.dev to https://myapplication.dev. I already tried going to
chrome://net-internals/#hsts
and entering myapplication.dev into the "Delete domain security policies" textbox at the very bottom, but this had no effect.
I also tried deleting the browser data.
I also changed the vhost to .app instead of .dev, but Chrome still redirected me to https:// ...
It's a Laravel application running on Laragon.
On other PCs in the same network, it works perfectly.

There is no way to prevent Chrome (>= 63) from using HTTPS on .dev domain names.
Google now owns the official .dev TLD and has already stated that it will not remove this behaviour.
The recommendation is to use another tld for development purposes, such as .localhost or .test.
More information about this update can be found in this article by Mattias Geniar.
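For example, on a typical Laragon or plain Apache setup, switching the virtual host over to .test is a small change; a sketch (the paths and names are placeholders):
<VirtualHost *:80>
    # serve the Laravel public directory over plain HTTP on a .test domain
    DocumentRoot "C:/laragon/www/myapplication/public"
    ServerName myapplication.test
</VirtualHost>
If your tooling does not manage DNS for you, also point myapplication.test at 127.0.0.1 in your hosts file.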

For Firefox:
you can disable the network.stricttransportsecurity.preloadlist property by visiting about:config.
For IE it still seems to work.
For Chrome there is no workaround; the rule appears to be hardcoded in the source code.
See this article: How to prevent Firefox and Chrome from forcing dev and foo domains to use https

This problem can't be fixed. Here is why:
Google owns the .dev gTLD.
Chrome forces HTTP to HTTPS on .dev domains directly within its source code.
From the 2nd link below:
...
// eTLDs
// At the moment, this only includes Google-owned gTLDs,
// but other gTLDs and eTLDs are welcome to preload if they are interested.
{ "name": "google", "include_subdomains": true, "mode": "force-https", "pins": "google" },
{ "name": "dev", "include_subdomains": true, "mode": "force-https" },
{ "name": "foo", "include_subdomains": true, "mode": "force-https" },
{ "name": "page", "include_subdomains": true, "mode": "force-https" },
{ "name": "app", "include_subdomains": true, "mode": "force-https" },
{ "name": "chrome", "include_subdomains": true, "mode": "force-https" },
...
References
ICANN Wiki Google
Chromium Source - transport_security_state_static.json

Check this link:
https://laravel-news.com/chrome-63-now-forces-dev-domains-https
Based on this article by Danny Wahl, the recommendation is to use one of the following: “.localhost”, “.invalid”, “.test”, or “.example”.

Chrome 63 forces .dev domains to HTTPS via preloaded HSTS, and soon all other browsers will follow.
The .dev gTLD has been bought by Google for internal use and can no longer be used over plain HTTP; only HTTPS is allowed. See this article for further explanation:
https://ma.ttias.be/chrome-force-dev-domains-https-via-preloaded-hsts/

It may be worth noting that there are other TLDs that are forced to HTTPS: https://chromium.googlesource.com/chromium/src.git/+/63.0.3239.118/net/http/transport_security_state_static.json#262
Currently these are google, dev, foo, page, app and chrome.

macOS Sierra, Apache: after Chrome 63 started forcing .dev top-level domains to HTTPS via preloaded HSTS, phpMyAdmin on my Mac stopped working. I read this and just edited the /etc/apache2/extra/httpd-vhosts.conf file:
<VirtualHost *:80>
    DocumentRoot "/Users/.../phpMyAdmin-x.y.z"
    ServerName phpmyadmin.localhost
</VirtualHost>
and restarted Apache (with sudo /usr/sbin/apachectl stop; sudo /usr/sbin/apachectl start), and now it works on http://phpmyadmin.localhost :). For Laravel applications the solution is similar.
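For instance, a sketch of the equivalent virtual host for a Laravel application might look like this (the project path and name are placeholders):
<VirtualHost *:80>
    # point Apache at Laravel's public directory
    DocumentRoot "/Users/you/Sites/myapp/public"
    ServerName myapp.localhost
    <Directory "/Users/you/Sites/myapp/public">
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>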
The nice thing about using the *.localhost top-level domain is that when you set up a new project you can forget about editing /etc/hosts.
How cool is that? :)
There's also an excellent proposal to add the .localhost domain as a
new standard, which would be more appropriate here.
UPDATE 2018
Using *.localhost is not ideal: some applications, such as cURL (used by php-guzzle), will not support it (more details here). It is better to use *.local.

Related

Laravel 9 (Vite) shared on local network on https

I am building a web app that uses the (mobile device's) camera, but this works only on HTTPS and localhost.
The web app is served locally using WAMP 3.2.9.
I've managed to use the secure protocol (HTTPS) within my WAMP configuration, but I'm having problems when I want to share my app on my local network so I can view the app on my phone and test the camera functionality.
In older versions of Laravel (which used Webpack) this was very easy using BrowserSync, but now, using Vite, I don't know exactly how to do this.
My local domain is myapp.test and can be accessed using both http and https.
I tried to use npm run vite --host, which shows the local and network addresses as well (e.g. 192.168..), but when I visit that address on my phone, I only see the Vite default page ("This is the Vite development server that provides Hot Module Replacement for your Laravel application."), not the app itself.
In my vite.config.js file I added that IP from the Vite network output:
import { defineConfig } from 'vite';
import laravel, { refreshPaths } from 'laravel-vite-plugin';
import mkcert from 'vite-plugin-mkcert';

export default defineConfig({
    server: {
        https: true,
        host: '192.168._._'
    },
    plugins: [
        laravel({
            input: [
                'resources/css/app.css',
                'resources/js/app.js',
            ],
            refresh: [
                ...refreshPaths,
                'app/Http/Livewire/**',
            ],
        }),
        mkcert()
    ]
});
Note that I also used the mkcert Vite plugin to allow me to use HTTPS.
Now I'm confused about the Vite service, which runs on port 5173 by default, and the app, which should run on port 443 to be served over HTTPS.
I've also tried using php artisan serve --host 192.168.., which works on my local network, but it doesn't work with HTTPS, so I had to focus on WAMP only.
So how can I share my app on my local network over HTTPS?
I'll explain how Vite works compared to Webpack, to hopefully help you understand a little better.
Both Webpack and Vite create a bundle of files when using the build commands to compile for production. With the dev command, which it seems you're using, they work a little differently. While Webpack watches for file changes to recompile the bundle and BrowserSync then reloads your assets for you, Vite starts a local server to serve the compiled files. This means that you don't proxy your original domain like with BrowserSync. Vite also creates a file in your public folder called "hot", which tells Laravel which URL it should use in the @vite() directive or the Vite::asset() method. Because of that you can use your original domain myapp.test even for the hot reloading of the dev command. I don't think Laravel actually supports --host, and if it does, I haven't been able to find it or figure it out.
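For reference, the @vite() directive mentioned above is what pulls those entry points into a Blade layout; with the default entry points it looks like this:
{{-- in the <head> of a Blade layout; the entry points match the vite.config.js shown above --}}
@vite(['resources/css/app.css', 'resources/js/app.js'])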
I did find https://github.com/Applelo/vite-plugin-browser-sync, which will hopefully solve testing on other devices, but I couldn't get it to work with HTTPS. Otherwise, I'm afraid you might have to look into something like ngrok and use the npm run build command instead of dev until better support is built into Laravel.
Update:
To configure the BrowserSync plugin you have to manually configure the proxy:
VitePluginBrowserSync({
    bs: {
        proxy: 'http://myapp.test/' // The usual access URL
    }
})
Since it doesn't seem like Laravel supports --host, I have found a workaround: because Laravel reads the asset host URL from the hot file in the public directory, you can replace its contents with the external Vite URL, e.g. http://192.168.1.37:5174, after running npm run dev -- --host. This will make Laravel use that URL when referencing any assets.
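A minimal sketch of that workaround, using the example address above (on Windows it may be easier to simply edit public/hot in a text editor):
npm run dev -- --host
# in another terminal: point Laravel's asset URLs at the externally reachable dev server
echo "http://192.168.1.37:5174" > public/hot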

Apply Lets Encrypt SSL for CNAME URL Laravel Forge

Situation:
I've created a website that allows users to create their own simple sub-sites. Initially these are on sub-domains (e.g. newsite.websitecreator.com). These have SSL applied to them via a wildcard certificate, *.websitecreator.com. All works fine!
I've also created a means for users to purchase a custom domain via an API, or to route their own domain to point to their subdomain. To achieve this, a CNAME is created and pointed to the subdomain. This routing works fine, using an include line in the nginx config which pulls in all the custom domains:
include /home/forge/websitecreator.com/public/content/websitecreator-customer-domains.conf;
Issue
The main issue is applying SSL to the custom domains. Obviously the certificate needs to be installed on the server, which has been tried through Forge's Let's Encrypt SSL option within the dashboard, with a view to using the Forge API for future Let's Encrypt certificates once this is automated. However, this gives me the following error:
Cloning into 'letsencrypt15721234230'...
ERROR: Challenge is invalid! (returned: invalid) (result: {
"type": "http-01",
"status": "invalid",
"error": {
"type": "urn:ietf:params:acme:error:unauthorized",
"detail": "Invalid response from http://newcustomdomain.co.uk/.well-known/acme-challenge/Da6CtvOTJnQVHQyENujDSih81TKuejKuaAWCWXsJKus [88.123.456.9]: \"\u003c!DOCTYPE HTML PUBLIC \\\"-//W3C//DTD HTML 4.01 Frameset//EN\\\" \\\"http://www.w3.org/TR/html4/frameset.dtd\\\"\u003e\u003chtml\u003e\u003chead\u003e\u003cmeta http-eq\"",
"status": 403
},
"url": "https://acme-v02.api.letsencrypt.org/acme/chall-v3/995722342/n7xg9g",
"token": "Da6CtvOTJnQVhh2h3hn2jbSih81TKuejKuaAWCWXsJKus",
"validationRecord": [
{
"url": "http://newcustomdomain.co.uk/.well-known/acme-challenge/Da6CtvOTJnQVHQyENujDSih81TKuejKuaAWCWXsJKus",
"hostname": "newcustomdomain.co.uk",
"port": "80",
"addressesResolved": [
"88.123.123.9"
],
"addressUsed": "66.343.234.9"
}
]
})
Status code 403 tells me that this is unauthorised for some reason.
Question
Despite the approaches tried above, my question to the SO community is: given the current setup (Forge, Laravel, nginx, etc.), how would you approach this? Any sample code or examples would be greatly appreciated.

Firefox content script not loading in some pages

Context
I am currently working on a browser extension which works as expected in Chrome and Opera, but I am facing issues with Firefox. Here is a minimal version of the manifest.json needed to reproduce the problem:
{
    "name": "Example",
    "version": "0.0.1",
    "author": "Pyves",
    "content_scripts": [
        {
            "all_frames": true,
            "matches": [
                "<all_urls>"
            ],
            "js": [
                "content.js"
            ]
        }
    ],
    "manifest_version": 2
}
And here is the related content.js:
console.log("Content script loaded");
Issue
"Content script loaded" is systematically logged regardless of the visited page when using Chrome and Opera. Nevertheless, the content script doesn't seem to load on some pages when using Firefox, for instance raw GitHub pages such as the following:
https://raw.githubusercontent.com/badges/shields/master/README.md
There are no error messages in the Firefox console stating why the content script was not executed on that particular page.
Questions
Why is the Firefox extension unable to load the content script into some pages?
What changes need to be made so that the extension works consistently on all browsers?
I finally figured out why the extension's content script is not loading on some pages when using Firefox.
After analysing the requests with the Network developer tools, it turns out that the following header is returned when fetching GitHub raw pages:
Content-Security-Policy: default-src 'none'; style-src 'unsafe-inline'; sandbox
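You can check this yourself; a quick way to inspect that response header for the page above is:
# print only the Content-Security-Policy header of the raw GitHub page
curl -sI https://raw.githubusercontent.com/badges/shields/master/README.md | grep -i content-security-policy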
According to the MDN Web Docs, the sandbox CSP directive has the following effect:
enables a sandbox for the requested resource [...]. It applies
restrictions to a page's actions including preventing popups,
preventing the execution of plugins and scripts, and enforcing a
same-origin policy.
Therefore Firefox is preventing extensions from executing content scripts in pages with the sandbox CSP, whereas other browsers such as Chrome and Opera do allow this behaviour. Related bug reports in Mozilla's Bugzilla (1267027 and 1411641) highlight that:
CSP 'sandbox' directive prevents content scripts from matching, due to unique origin
This issue has been acknowledged and will hopefully be fixed in future releases of Firefox.

How can I get my Hipchat Integration on Heroku to authenticate?

I followed the step-by-step guide here.
I made a simple app that posts a message to the rooms the integration is installed in whenever a regex matches (as described in the tutorial above).
When I initially add the integration to a HipChat room, it works fine. However, after a period of time it stops working.
The following error appears in my Heroku logs:
JWT verification error: 400 Request can't be verified without an OAuth secret
I assume something in my configuration is wrong, or it's my lack of use of OAuth, but after googling around I can't find any specific answers on what it should look like.
My config.json looks like this:
"production": {
"usePublicKey": true,
"port": "$PORT",
"store": {
"adapter": "jugglingdb",
"type": "sqlite3",
"database": "store.db"
},
"whitelist": [
"*.hipchat.com"
]
},
And my request handler looks like this:
app.post('/foo',
    addon.authenticate(),
    function (req, res) {
        hipchat.sendMessage(req.clientInfo, req.identity.roomId, 'bar')
            .then(function (data) {
                res.sendStatus(200);
            });
    }
);
Any specific direction on configuration and use of OAuth for HipChat and Heroku would be amazing!
I personally haven't used the jugglingdb adapter with Heroku and don't know if you can actually look into the database, but it seems like somewhere along the way clientInfo disappears from the store. (Heroku's dyno filesystem is ephemeral, so a SQLite file such as store.db will not survive a dyno restart, which would explain the installation data disappearing after a while.)
My suggestion is to start testing locally with ngrok and Redis, so that you can troubleshoot locally and then push the working code to Heroku.
Three things I needed to do in order to fix my problem:
Install the Heroku Redis add-on for my Heroku app (confirm that the $REDIS_URL environment variable was added to your app settings).
Add this line to my app.js file:
ac.store.register('redis', require('atlassian-connect-express-redis'));
Change the production.store object in the config.json to be the following:
"store": {
"adapter": "redis",
"url": "$REDIS_URL"
},

Firefox WebExtensions and Cross-domain privileges

I am trying to port a Chrome extension to Firefox using the relatively new Firefox WebExtensions API.
I always get the following error:
Cross-Origin Request Blocked:
The Same Origin Policy disallows reading the remote resource at .... (Reason: CORS header 'Access-Control-Allow-Origin' missing)
I added the website I would like to access to the permissions section inside manifest.json, as explained in the documentation, and it works in Google Chrome.
Normally it should work that way; at least that's how it's explained at https://developer.mozilla.org/en-US/Add-ons/WebExtensions/Content_scripts#Cross-domain_privileges
I would be very thankful for any help since I am out of ideas.
manifest.json
{
    ...
    "permissions": [
        "<all_urls>"
    ]
}
I think you need to add a CSP header to your HTML page (see http://content-security-policy.com/). I had to add one to get mine to work after a similar warning.
