Situation:
I've created a website that allows users to create their own simple sub-sites. Initially these live on subdomains (e.g. newsite.websitecreator.com), which have SSL applied via a wildcard certificate for *.websitecreator.com. All works fine!
I've also created a means for users to purchase a custom domain via an API, or to route their own domain to point at their subdomain. To achieve this, a CNAME is created and pointed at the subdomain. This routes fine using an include line in the nginx config, which pulls in all the custom domains:
include /home/forge/websitecreator.com/public/content/websitecreator-customer-domains.conf;
Issue
The main issue is applying SSL to the custom domains. Obviously the certificates need to be installed on the server, which I have tried through Forge's LetsEncrypt SSL option in the dashboard, with a view to using the Forge API for future LetsEncrypt certificates once this is automated. However, this gives me the following error:
Cloning into 'letsencrypt15721234230'...
ERROR: Challenge is invalid! (returned: invalid) (result: {
"type": "http-01",
"status": "invalid",
"error": {
"type": "urn:ietf:params:acme:error:unauthorized",
"detail": "Invalid response from http://newcustomdomain.co.uk/.well-known/acme-challenge/Da6CtvOTJnQVHQyENujDSih81TKuejKuaAWCWXsJKus [88.123.456.9]: \"\u003c!DOCTYPE HTML PUBLIC \\\"-//W3C//DTD HTML 4.01 Frameset//EN\\\" \\\"http://www.w3.org/TR/html4/frameset.dtd\\\"\u003e\u003chtml\u003e\u003chead\u003e\u003cmeta http-eq\"",
"status": 403
},
"url": "https://acme-v02.api.letsencrypt.org/acme/chall-v3/995722342/n7xg9g",
"token": "Da6CtvOTJnQVhh2h3hn2jbSih81TKuejKuaAWCWXsJKus",
"validationRecord": [
{
"url": "http://newcustomdomain.co.uk/.well-known/acme-challenge/Da6CtvOTJnQVHQyENujDSih81TKuejKuaAWCWXsJKus",
"hostname": "newcustomdomain.co.uk",
"port": "80",
"addressesResolved": [
"88.123.123.9"
],
"addressUsed": "66.343.234.9"
}
]
})
The 403 status code tells me that the challenge response is unauthorised for some reason.
Question
Despite the approaches tried above, my question to the SO community is: given the current setup (Forge, Laravel, nginx etc.), how would you approach this? Any sample code / examples would be greatly appreciated.
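One common approach is to answer all http-01 challenges from a single shared webroot, so that any domain CNAMEd to the server can be validated before its certificate exists. A minimal nginx sketch, assuming a hypothetical shared directory /home/forge/letsencrypt-challenges (the location block would sit in the server block that handles the custom domains):

# Answer ACME http-01 challenges for every custom domain from one
# shared directory; the ACME client must write its challenge files
# under /home/forge/letsencrypt-challenges/.well-known/acme-challenge/
location ^~ /.well-known/acme-challenge/ {
    root /home/forge/letsencrypt-challenges;
    default_type "text/plain";
    try_files $uri =404;
}

Separately, the HTML frameset in the error's "Invalid response" looks like a parking or forwarding page rather than this server's content, so it is worth confirming that the custom domain's CNAME really resolves to this server before retrying the challenge.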
Related
I am using the Exchanger API (florianv/laravel-swap) for foreign currency exchange rates in my Laravel project.
My composer.json, snipped for relevance:
"require": {
"php": "^7.1.3",
"florianv/laravel-swap": "^1.3",
},
While the API returns values in my local environment, it throws an exception in the production environment.
The chain resulted in 2 exception(s):
Exchanger\Exception\Exception: The maximum allowed API amount of
monthly API requests has been reached.
Exchanger\Exception\Exception: The currency is not supported or
Google changed the response format
The error is pretty clear by itself. But before upgrading the API plan, I thought I would try another API key, so I got a free key from fixer.io and inserted it in the config/swap.php file in my project.
/* config/swap.php */
'services' => [
    'fixer' => [
        'access_key' => 'MY_ACCESS_KEY', // Your app id
    ],
    'google' => true,
],
The error still persists.
Am I supposed to enter the key somewhere else? And why does it work in my local environment but not in production?
Choose another service instead of Google.
https://github.com/florianv/laravel-swap/issues/51
https://github.com/florianv/laravel-swap/issues/35
Or you can use the latest version and follow the docs.
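For what it's worth, Swap tries the configured services in order until one succeeds, so the chain only falls through to google after fixer fails; removing google avoids the second exception entirely. A minimal config/swap.php sketch (FIXER_ACCESS_KEY is a placeholder .env key). Note also that if the production server caches configuration, the new key will not be picked up until the cache is rebuilt with php artisan config:cache, which may explain the local/production difference:

/* config/swap.php */
'services' => [
    // fixer only: the google service is removed so the chain
    // cannot fall through to it and fail again.
    'fixer' => [
        'access_key' => env('FIXER_ACCESS_KEY'), // set the real key in .env
    ],
],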
Hi Experts,
I am following the https://open.sap.com/courses/s4h13/items/258qEhXx5kdG8b4SXMSJYp tutorial. After deploying the app I am getting 404 for my servlets in the approuter application, while the same servlets give me HTTP 401 in 'address-manager', as expected.
Has anyone done this successfully? If so, please point me in the right direction.
I have gone through everything I could think of, but I can't get past this issue.
xs-app.json file content
{
  "welcomeFile": "index.html",
  "routes": [
    {
      "source": "^/api/(.*)",
      "target": "/api/$1",
      "destination": "app-destination"
    },
    {
      "source": "^/address-manager/(.*)",
      "target": "/address-manager/$1",
      "destination": "app-destination"
    }
  ],
  "logout": {
    "logoutEndpoint": "/logout",
    "logoutPage": "/logout.html"
  }
}
The destinations environment variable of the approuter on SAP Cloud Platform, Cloud Foundry needs to reference the URL(s) at which you reach the application(s) that you want to access via the route(s) defined in the approuter. (Not to be confused with the destinations environment variable that you may be using as a placeholder in the backend application built with the SAP S/4HANA Cloud SDK.)
In your case, this should probably be some URL pointing to the address-manager, your target application. In the example value mentioned in your comment, you point to the mock server instead, which is probably not what you want.
Change the destinations environment variable to the following and push / restart the application again. (Insert the URL that points to your address manager application deployment.)
[{"name":"app-destination", "url" :"address-manager-<random text>.cfapps.eu10.hana.ondemand.com/", "forwardAuthToken": true}]
The fact that you can login and logout despite the misconfigured destination is expected, because those paths are actually served by the approuter itself.
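If it helps, the variable can be set and applied from the Cloud Foundry CLI; a sketch assuming the approuter app is literally named approuter (substitute your real app name and address-manager URL):

# Point app-destination at the address-manager deployment, then
# restart so the approuter picks up the new environment
cf set-env approuter destinations '[{"name": "app-destination", "url": "https://address-manager-<random text>.cfapps.eu10.hana.ondemand.com/", "forwardAuthToken": true}]'
cf restart approuter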
I was in the process of extending the tutorial mentioned in Step 9 with a NodeJS microservice. However, I am having a strange issue with the communication to the backend.
The flow I have is an App Router that directs to an HTML5 microservice (static buildpack), which consumes either a Java or NodeJS microservice. The Java part works fine along with authentication scopes, but for NodeJS I always get a 404 (Not Found) error when I call the respective path /node/hello (hello should return a function's output from the server).
This is the xs-app.json I am using for routing
{
  "welcomeFile": "index.html",
  "authenticationMethod": "route",
  "websockets": {
    "enabled": true
  },
  "routes": [
    {
      "source": "/odata/v4/(.*)",
      "target": "/odata/v4/$1",
      "destination": "business-partner-api"
    },
    {
      "source": "/",
      "target": "/",
      "destination": "business-partner-frontend"
    },
    {
      "source": "/node/(.*)",
      "target": "/$1",
      "destination": "business-partner-node"
    }
  ]
}
The issue is only with the /node block; the others work fine. I have also noticed something strange: if I change the default destination (/) from business-partner-frontend to business-partner-node, the app router successfully calls the NodeJS server with authentication propagated, so the issue appears to be related to the xs-app file and not to the destination itself.
I have also tried adding the port to the destination and adding a Staticfile mapping to the HTML5 project, but without success.
Is there anything I might be missing in the Node part of the config?
Best Regards,
The issue is probably the order of your routes, which matters for the routing: the first route whose source matches the current path wins. In your case, the / of the second route matches all paths, including /node/....
Reorder your routes so that the node destination comes before the frontend destination.
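Concretely, a sketch of the reordered routes array from your xs-app.json, with the catch-all / moved last:

"routes": [
  {
    "source": "/odata/v4/(.*)",
    "target": "/odata/v4/$1",
    "destination": "business-partner-api"
  },
  {
    "source": "/node/(.*)",
    "target": "/$1",
    "destination": "business-partner-node"
  },
  {
    "source": "/",
    "target": "/",
    "destination": "business-partner-frontend"
  }
]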
Suddenly Google Chrome redirects my virtual-host domain myapplication.dev to https://myapplication.dev. I already tried to go to
chrome://net-internals/#hsts
and enter myapplication.dev into the "Delete domain security policies" textbox at the very bottom, but this had no effect.
I also tried to delete the browser data.
I also changed the vhost to .app instead of .dev, but Chrome still redirected me to https://...
It's a Laravel application running on Laragon.
On other PCs in the same network, it works perfectly.
There is no way to prevent Chrome (>= 63) from using HTTPS on .dev domain names.
Google now owns the official .dev TLD and has already stated that they will not remove this behaviour.
The recommendation is to use another TLD for development purposes, such as .localhost or .test.
More information about this update can be found in this article by Mattias Geniar.
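For example, moving a local site from .dev to .test is usually just a rename in the vhost and the hosts file; a sketch (myapplication.test is a placeholder, and the hosts file on Windows, where Laragon runs, lives at C:\Windows\System32\drivers\etc\hosts):

# C:\Windows\System32\drivers\etc\hosts
127.0.0.1    myapplication.test

The corresponding server name in the vhost then changes to myapplication.test to match.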
For Firefox:
You can disable the network.stricttransportsecurity.preloadlist property by visiting about:config.
For IE it seems to still work.
For Chrome there is no solution; I think it's hardcoded in the source code.
See this article: How to prevent Firefox and Chrome from forcing dev and foo domains to use https
This problem can't be fixed. Here's why:
Google owns the .dev gTLD.
Chrome forces HTTP to HTTPS on .dev domains directly within the source code.
From the 2nd link below:
...
// eTLDs
// At the moment, this only includes Google-owned gTLDs,
// but other gTLDs and eTLDs are welcome to preload if they are interested.
{ "name": "google", "include_subdomains": true, "mode": "force-https", "pins": "google" },
{ "name": "dev", "include_subdomains": true, "mode": "force-https" },
{ "name": "foo", "include_subdomains": true, "mode": "force-https" },
{ "name": "page", "include_subdomains": true, "mode": "force-https" },
{ "name": "app", "include_subdomains": true, "mode": "force-https" },
{ "name": "chrome", "include_subdomains": true, "mode": "force-https" },
...
References
ICANN Wiki Google
Chromium Source - transport_security_state_static.json
Check this link:
https://laravel-news.com/chrome-63-now-forces-dev-domains-https
Based on this article by Danny Wahl, the recommendation is to use one of the following: “.localhost”, “.invalid”, “.test”, or “.example”.
Chrome 63 forces .dev domains to HTTPS via preloaded HSTS, and soon all other browsers will follow.
The .dev gTLD has been bought by Google for internal use and can no longer be used with HTTP; only HTTPS is allowed. See this article for further explanation:
https://ma.ttias.be/chrome-force-dev-domains-https-via-preloaded-hsts/
It may be worth noting that there are other TLDs that are forced to HTTPS: https://chromium.googlesource.com/chromium/src.git/+/63.0.3239.118/net/http/transport_security_state_static.json#262
Right now these are google, dev, foo, page, app and chrome.
macOS Sierra, Apache: after Chrome 63 started forcing .dev top-level domains to HTTPS via preloaded HSTS, phpMyAdmin on my Mac stopped working. I read this and just edited the /etc/apache2/extra/httpd-vhosts.conf file:
<VirtualHost *:80>
DocumentRoot "/Users/.../phpMyAdmin-x.y.z"
ServerName phpmyadmin.localhost
</VirtualHost>
and restarted Apache (with sudo /usr/sbin/apachectl stop; sudo /usr/sbin/apachectl start), and now it works at http://phpmyadmin.localhost :). For Laravel applications the solution is similar.
The nice thing is that using the *.localhost top-level domain, you can forget about editing /etc/hosts when you set up a new project.
How cool is that? :)
There's also an excellent proposal to add the .localhost domain as a new standard, which would be more appropriate here.
UPDATE 2018
Using *.localhost is not good - some applications will not support it, such as cURL (used by php-guzzle) - more details here. It is better to use *.local.
I followed the step-by-step guide here.
I made a simple app that posts a message to the rooms the Integration is installed in, based on a regex (as described in the tutorial above).
When I initially add the Integration to a HipChat room, it works fine. However, after a period of time it stops working.
The following error appears in my Heroku logs:
JWT verification error: 400 Request can't be verified without an OAuth secret
I assume something is wrong with my configuration, or with my lack of use of OAuth, but after googling around I can't find any specific answers on what it should look like.
My config.json looks like this:
"production": {
"usePublicKey": true,
"port": "$PORT",
"store": {
"adapter": "jugglingdb",
"type": "sqlite3",
"database": "store.db"
},
"whitelist": [
"*.hipchat.com"
]
},
And my request handler looks like this:
app.post('/foo',
  addon.authenticate(),
  function (req, res) {
    hipchat.sendMessage(req.clientInfo, req.identity.roomId, 'bar')
      .then(function (data) {
        res.sendStatus(200);
      });
  }
);
Any specific direction on configuration and use of OAuth for HipChat and Heroku would be amazing!
I personally haven't used the jugglingdb adapter with Heroku and don't know if you can actually look into the database, but it seems like somewhere along the way clientInfo disappears from the store.
My suggestion is to start testing locally with ngrok and redis, so that you can troubleshoot locally and then push the working code to Heroku.
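A minimal local-testing sketch along those lines (port 3000 is an assumption; use whatever port your addon listens on):

# Terminal 1: run a local Redis for the addon's store
redis-server

# Terminal 2: expose the locally running addon so HipChat
# can reach it during installation
ngrok http 3000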
Three things I needed to do in order to fix my problem:
Install the Heroku Redis add-on for my Heroku app (confirm that the $REDIS_URL environment variable was added to your app settings; a command sketch follows this list).
Add this line to my app.js file:
ac.store.register('redis', require('atlassian-connect-express-redis'));
Change the production.store object in the config.json to be the following:
"store": {
"adapter": "redis",
"url": "$REDIS_URL"
},
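For reference, step 1 can also be done from the Heroku CLI; a sketch assuming a hypothetical app name my-hipchat-addon:

# Provision the Redis add-on and confirm REDIS_URL was set
heroku addons:create heroku-redis --app my-hipchat-addon
heroku config:get REDIS_URL --app my-hipchat-addon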