My wildcard subdomains are not working when I am using a load balancer. I have edited the nginx config so the domain is .xxx.com on the load balancer and on both of my app servers. The servers are set up using Forge.
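For reference, the load balancer's server block looks roughly like this (a sketch only; the upstream addresses are placeholders), and the app servers use the same server_name .xxx.com; line. The leading dot matches the bare domain and every subdomain:

upstream app_servers {
    server 10.0.0.1;    # app server 1 (placeholder address)
    server 10.0.0.2;    # app server 2 (placeholder address)
}

server {
    listen 80;
    server_name .xxx.com;    # matches xxx.com and *.xxx.com

    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;    # forward the requested (sub)domain to the app servers
    }
}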
When I visit a subdomain, the app interprets it as the main domain. For example, visiting subdomain.xxx.com shows me the homepage of xxx.com, and visiting subdomain.xxx.com/blog shows me xxx.com/blog (which is a 404). The URL also changes in the browser and doesn't include the subdomain.
The same code works on my staging server, which leads me to believe that the load balancer is causing the issue; I don't have a load balancer on the staging server.
I have restarted nginx and cleared the route and config caches.
Looking at the request in Telescope, I see that the host is set to the main domain (not the subdomain).
Why is the subdomain not working when using a load balancer?
Turns out the DNS hadn't propagated yet. Weird result.
I have the following setup:
React.js app on CloudFront (example.eu) -> certificate for *.example.eu and example.eu
Fargate Python FastAPI instance on port 5000
Internet-facing load balancer: http://***.eu-central-1.elb.amazonaws.com/
I can visit my website https://example.eu just fine
So in my front end I used the load balancer URL for the requests to the Fargate instance --> GET http://***.eu-central-1.elb.amazonaws.com/users.
When I click the button on the website to fire the request to the backend, I get a mixed content error in the browser.
So I thought I'd do the calls over HTTPS: I added an HTTPS listener on port 443 and attached the certificate created earlier. If I deactivate SSL verification (e.g. in Postman), that works fine, but otherwise I get the following error in my browser:
VM11:1 GET https://***.eu-central-1.elb.amazonaws.com/users net::ERR_CERT_COMMON_NAME_INVALID
Do I need another certificate for the load balancer URL? I checked out a lot of tutorials and they only create one for the domain.
Do I need to add the certificate to my back-end?
I'm really confused about how to establish proper HTTPS communication from example.eu through the load balancer https://***.eu-central-1.elb.amazonaws.com to my Fargate backend on port 5000.
Thanks
Found the solution:
Go to Route 53 and add an A record with an alias target pointing to the ALB.
Important: add a subdomain in the Name field, e.g. api.example.eu.
That's it :)
I have a Laravel 7 app with two instances behind an NGINX load balancer, with SSL terminating at the load balancer. I've set up Trusted Proxies as described in the Laravel documentation, which is working as expected, and all traffic is using HTTPS. Both Laravel instances use the same Redis server for sessions and a separate Redis server for cache, and both instances use the same session domain in .env.
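Roughly, the load balancer terminates SSL and forwards to the two instances like this (just a sketch; the addresses, domain and certificate paths are placeholders, not my exact config):

upstream laravel_app {
    server 10.0.0.11;    # Laravel instance 1 (placeholder)
    server 10.0.0.12;    # Laravel instance 2 (placeholder)
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://laravel_app;
        proxy_set_header Host              $host;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;    # lets Laravel's TrustProxies treat the request as HTTPS
    }
}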
Each Laravel server works correctly if it is the only instance in the load balancer. However, when both Laravel instances are added to the load balancer, any Socialite login fails with an invalid state error.
HTTP 500 Internal Server Error
Laravel\Socialite\Two\InvalidStateException
AbstractProvider->user()
/app/Http/Controllers/Auth/LoginController.php (line 108)
// Get google user data
$google = Socialite::driver('google')->user();
I have the same issue with both Google and Facebook logins. If I try to log in manually or register a new user, I get 419 | Page Expired, but none of these issues occur when the load balancer has only one instance or when I don't use a load balancer.
Thanks,
Lee.
OK, so I've fixed the problem: I rebuilt the .env file and then cut and pasted it into both servers; I then generated new keys and restarted the app servers and the load balancer.
I'd gone through and checked both .env files several times, so either I missed something or the issue wasn't visible?
Either way it's working now.
Thanks,
Lee.
I have a website -- portaldevservices.com
The domain is managed by Route 53 and works fine with HTTP.
I have one EC2 instance.
I recently decided to move to HTTPS and put a load balancer in front of the EC2 instance.
From here I created a load balancer and edited the A record and CNAME to point to the load balancer's DNS name. The health check is fine and the EC2 instance was added.
Using AWS Certificate Manager, I created a certificate and added it to the load balancer.
When I try to access https://portaldevservices.com I get this (website screenshot).
Here is some info on my setup (screenshots): hosted zones, load balancer port config, load balancer basic config, load balancer listener, ACM certificate.
Thanks for the help. I'm a mobile dev so this is my first time really stepping into the backend world.
Solved:
Ok, that was a lot easier than I thought. If anyone else experiences this issue, all I had to do was add "www." to the front of my A record:
From portaldevservices.com -> www.portaldevservices.com
The https access now works well.
I have a domain on GoDaddy and am using Amazon Route 53 for hosting. I want to create a subdomain and make it point to a subdirectory of my site. How is this possible?
What I have tried:
Using an S3 bucket, but the S3 settings say it hosts a static site. My site isn't static, so I believe that option won't work.
I have added a subdomain on Route 53 with the help of this article:
How do I create a subdomain for a domain hosted through Route 53?
and then changed my server settings to make the new domain point to a subdirectory using this answer:
How to point domain name to Amazon EC2 subdirectory. But it didn't work; the web page shows "DNS server not found".
Any kind of help will be appreciated. Thanks in advance.
DNS resolves a domain name to the IP address of your server. It only resolves the hostname part of a URL, which identifies the server -- it is not involved in the remainder of the URL (the path).
For example:
http://example.com/path/index.html
DNS converts example.com into the IP address of the server. The request for /path/index.html is then sent to port 80 of that server.
Therefore, it is not possible to configure Amazon Route 53 (or any DNS service) to point to a subdirectory of your site.
You could, however, configure your web server to recognize requests going to different domain names and serve different content to the user. For example:
http://images.example.com/foo.jpg
DNS will resolve images.example.com to the same IP address, but the web server can notice that the original request was to images.example.com, so it should serve a different set of content, or content from a desired subdirectory. This configuration would be done within your web server. If that's what you'd like to do, please consult your web server documentation or search the web for that topic.
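For example, in nginx the per-hostname split could look something like the sketch below (the server names and directory paths are illustrative only, not a recommendation for your exact setup):

server {
    listen 80;
    server_name example.com;
    root /var/www/example.com/public;
}

server {
    listen 80;
    server_name images.example.com;
    # Same IP address, but content is served from a subdirectory
    root /var/www/example.com/public/images;
}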
I had the same issue.
The solution for me was to set the load balancer (Application Load Balancer) as the target for sub.mydomain.com, and then, in the load balancer listener rules, add a rule for the subdomain (as the host header value) with a redirect.
I have a CDN for my website that uses Nginx and Drupal.
In my nginx configuration, I am trying to enable page-level caching so requests like "website.com/page1" can be served from the CDN. Currently, I am only able to serve static files from the CDN (GET requests on 'website.com/sites/default/files/abc.png').
All page-level requests always hit the back-end web server.
What nginx config should I add in order for "website.com/page1" requests to also be served from the CDN?
Thanks!
If I understand you correctly, you want to set up another Nginx instance so that it works as a basic CDN in front of your current web server (Nginx or Apache?) on which Drupal resides. You need a reverse-proxy Nginx server to cache both static assets and pages. Since it's not entirely clear to me what you wrote, this is what I assumed.
If you want a setup like this, then you should read the following article on how to set up a reverse proxy.
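At its core, such a reverse proxy with page caching looks roughly like this (a sketch; the cache zone name, backend address, and cache times are assumptions you would tune for your Drupal site):

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=drupal_cache:10m max_size=1g inactive=60m use_temp_path=off;

server {
    listen 80;
    server_name website.com;

    location / {
        proxy_pass http://127.0.0.1:8080;            # backend web server running Drupal (placeholder address)
        proxy_set_header Host $host;
        proxy_cache drupal_cache;
        proxy_cache_valid 200 301 10m;               # cache successful page responses for 10 minutes
        proxy_ignore_headers Set-Cookie;             # allows caching pages that set cookies; be careful with logged-in users
        add_header X-Cache-Status $upstream_cache_status;   # shows HIT/MISS for debugging
    }
}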