Serving a website using Caddy - amazon-ec2

I have created an application and want to serve it using Caddy.
On my localhost, if I run the application on 127.0.0.1:9000 and set it as the proxy target in the Caddyfile, it works. I figured I would have to serve my website the same way in production.
Now I am trying to serve it on my EC2 instance. I tried serving it on localhost, 127.0.0.1, and even the domain directly, but Caddy does not work here. One of the things I noticed is that the URL automatically changes from http to https, so I figure Caddy is at least running and recognizing the request, but it is not able to find the content.
Below is my Caddyfile.
abc.xyz.com {
    proxy / zbc.xyz.com:9000 {
        transparent
    }
}
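For comparison, a minimal sketch that mirrors the working local setup, assuming the app listens on 127.0.0.1:9000 on the EC2 instance itself (this uses Caddy v1's proxy directive, as in the question; in Caddy v2 the equivalent directive is reverse_proxy):
abc.xyz.com {
    # proxy to the app running on the same instance, as in the local setup
    proxy / 127.0.0.1:9000 {
        transparent
    }
}
# Caddy v2 equivalent:
# abc.xyz.com {
#     reverse_proxy 127.0.0.1:9000
# }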

Related

Should I redirect HTTPS to HTTP in AWS ALB?

So I have a load balancer connected to an EC2 instance. The EC2 instance has a PHP website running on port 8000, hosted in IIS 8.5. The HTTP health check is passing after adding a binding in IIS for port 8000, but the HTTPS health check is failing. Since I have used URL Rewrite in IIS to redirect all HTTP to HTTPS, I can still access the website over HTTPS even though the load balancer's HTTPS health check is failing.
But I really want to make my HTTPS health check pass.
To do that, I figured I could either run the HTTPS application inside EC2 on a port other than 8000 and add a binding for it (dropped the idea because the client did not want that), or
redirect the HTTPS target group to the HTTP target group.
Is this possible? If yes, how?
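One common pattern here is to terminate TLS at the ALB and have the HTTPS listener forward to the same HTTP target group that the port 80 listener uses; a hedged AWS CLI sketch with placeholder ARNs (not from the original question):
# Terminate TLS at the ALB and forward the 443 listener to the existing HTTP target group
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT:loadbalancer/app/my-alb/ID \
    --protocol HTTPS --port 443 \
    --certificates CertificateArn=arn:aws:acm:REGION:ACCOUNT:certificate/ID \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/http-target-group/ID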

Target group 443 gives Health checks failed with these codes: [502]

I wanted to deploy a Laravel website to Amazon, so I did the following steps:
Deployed the Laravel app using Elastic Beanstalk
Configured a Route 53 A record to point to the IP of the EC2 instance
Created an Application Load Balancer with two listeners, one on 80 and one on 443
Created 2 target groups, Tg80 and Tg443, and assigned them to the respective listeners
Note that Tg443 has a valid SSL certificate
Changed the security group of the EC2 instance to be the load balancer's
Changed the A record in Route 53 to point to the load balancer
Results:
The site works perfectly on port 80 with HTTP, same for the health check, and I can access the site normally from any browser
The site returns [502 Bad Gateway] on https:443
After SSHing into the instance, /var/log/httpd/error_log contains the following error: /var/www/html/.htaccess: RewriteCond: bad flag delimiters
So, following the link enforce-https-laravel, I tried:
Configuring .htaccess in the Laravel app as described in the link, refreshed everything (php artisan config:cache), retried the health check, but same results
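For reference, the kind of HTTPS rewrite block such guides describe looks roughly like this; a sketch, not the exact rules from the post (the X-Forwarded-Proto condition matters because the ALB terminates TLS, and malformed flags in the brackets are what typically produce the "bad flag delimiters" error):
# .htaccess: redirect to HTTPS only when the ALB forwarded the request over plain HTTP
RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]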
Then I deleted .htaccess and configured app/Providers/AppServiceProvider.php:
use Illuminate\Contracts\Routing\UrlGenerator;

public function boot(UrlGenerator $url)
{
    if (env('ENFORCE_SSL', false)) {
        $url->forceScheme('https');
    }
}
And added ENFORCE_SSL=true in .env and then ran php artisan config:cache; as said in the same link, this is a newer approach than .htaccess.
But same results.
I don't know what to do next or how to fix this. I want to be able to access the site with SSL. Please help. Thank you.
Based on the comments, the issue was that the health checks were set to use HTTPS between the ALB and EC2. However, since the ALB terminates the SSL connections, all traffic between the ALB and EC2 is HTTP, not HTTPS.
Therefore, the fix for the failing health checks was to use HTTP for them, rather than HTTPS.
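In concrete terms, that means switching the 443 target group's health check to plain HTTP against the backend port; a hedged AWS CLI sketch with a placeholder ARN:
# Have the ALB health-check the instance over HTTP rather than HTTPS (placeholder ARN)
aws elbv2 modify-target-group \
    --target-group-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/Tg443/ID \
    --health-check-protocol HTTP \
    --health-check-port 80 \
    --health-check-path /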

How to establish an Apache2 proxy for a localhost:3000 Node.js based application

I'm trying to figure out how to link my Apache2 server running on AWS Lightsail to an application I'm hosting that uses http://localhost:3000 when activated: a simple Node.js based CMS called Vapid. I have the server linked to my domain name, bigsheepcollective.com, and I can get Vapid running through the AWS terminal, but only the Apache2 landing page shows up on my domain name. I saw a tutorial here that goes over establishing a proxy pass on an Nginx-run server, but I'm not sure how to do the same thing for one using Apache2.
I've tried using the Nginx tutorial and I've also done some extensive searching into proxy setups for Apache2, but I'm confused about what type of proxy I need when it comes to running an application that uses http://localhost:3000.
Hi, Bitnami engineer here.
You can include these lines in the /opt/bitnami/apache2/conf/bitnami/bitnami.conf file, or in the specific .conf file you created for your application:
ProxyPass / http://127.0.0.1:3000/
ProxyPassReverse / http://127.0.0.1:3000/
This way you will access your application when accessing the public IP of your instance or its associated domain.
This guide in our documentation explains the whole process to configure a Node.js application on top of Bitnami.
https://docs.bitnami.com/aws/infrastructure/mean/administration/create-custom-application-nodejs/
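On a stock (non-Bitnami) Apache install, a minimal sketch of the same idea, assuming mod_proxy and mod_proxy_http are enabled (a2enmod proxy proxy_http) and the file lives under the usual sites-available directory:
<VirtualHost *:80>
    ServerName bigsheepcollective.com

    # Forward every request to the Vapid app listening on port 3000
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>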

Separate frontend and backend with Heroku

I have an application, let's call it derpshow, that consists of two repositories, one for the frontend and one for the backend.
I would like to deploy these using Heroku, preferably on the same domain. I would also like to use pipelines for both parts separately, with a staging and production environment for each.
Is it possible to get both apps running on the same domain, so that the frontend can call the backend on /api/*? Another option would be to serve the backend on api.derpshow.com and the frontend on app.derpshow.com but that complicates security somewhat.
What are the best practices for this? The frontend is simply static files, so it could even be served from S3 or similar, but I still need the staging and production environments and automatic testing and so and so forth.
Any advice is greatly appreciated!
For what you are trying to do, you should use a web server to serve the static content and to provide access to the container (gunicorn, Tomcat, etc.) holding your app. This is also best practice.
Assume you use nginx as the web server, because it's easier to set up. The nginx config file would look like this:
# Server definition for project A
server {
    listen 80;
    server_name derpshow.com www.derpshow.com;

    location / {
        # Proxy to gunicorn.
        proxy_pass http://127.0.0.1:<projectA port>;
        # etc...
    }
}

# Server definition for project B
server {
    listen 80;
    server_name api.derpshow.com www.api.derpshow.com;

    location / {
        # Proxy to gunicorn on a different port.
        proxy_pass http://127.0.0.1:<projectB port>;
        allow 127.0.0.1;
        deny all;
        # etc...
    }
}
And that's it.
OLD ANSWER: Try using nginx-buildpack; it allows you to run NGINX in front of your app server on Heroku. Then you need to run your apps on different ports, map one port to api.derpshow.com and the other to app.derpshow.com, and then you can restrict calls to api.derpshow.com to localhost only.
Would just like to contribute what I recently did. I had a Node.js w/ Express backend and a plain old Bootstrap/vanilla frontend (using just XMLHttpRequest to communicate). To connect the two, you can simply tell Express to serve static files (i.e. serve requests to /index.html, /img/pic1.png, etc.).
For example, to tell express to serve the assets in directory test_site1, simply do:
app.use(express.static('<any-directory>/test_site1'));
Many thanks to this post for the idea: https://www.fullstackreact.com/articles/deploying-a-react-app-with-a-server/
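A minimal sketch of that arrangement (the directory name and the /api route are illustrative, not from the original post):
// server.js: one Express app serves both the static frontend and the API
const express = require('express');
const app = express();

// Serve the built frontend assets (directory name is illustrative)
app.use(express.static('test_site1'));

// Backend routes live under /api/*
app.get('/api/hello', (req, res) => {
    res.json({ message: 'hello from the backend' });
});

app.listen(process.env.PORT || 3000);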
Note that all these answers appear to be variations of merging the code to be served by one monolith server.
Jozef's answer appears to be adding an entire nginx server on top of everything (both the frontend and backend) to reverse proxy requests.
My answer is about letting your backend server serve frontend requests; I am sure there is also a way to let the frontend server serve backend requests.

Redirect :80 to :443 (http to https) with Wakanda Server

I've set up a Wakanda server hosted on an Amazon EC2 instance, with SSL certificates installed as per the Wakanda documentation. I can access the home page via https easily enough, but it won't redirect incoming traffic on port 80 to 443 automatically.
Since it's an Amazon AWS instance with an Elastic IP, I've tried setting up a load balancer to handle the traffic routing for me as a possible solution. While it reports that it's routing "Load Balancer Port = 80" to "Instance Port = 443", it doesn't seem to be redirecting traffic either.
I may be missing something entirely in the way the load balancer is supposed to work, but is there a way for the Wakanda server to automatically route incoming http traffic to https?
Edit: I have also tried setting up a .htaccess file in my webFolder directory to manually redirect traffic, though I'm finding very limited documentation on whether that is even a viable option.
Thanks!
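One note on the load balancer approach: forwarding port 80 to instance port 443 only changes the backend port; it does not issue an HTTP-to-HTTPS redirect. If an Application Load Balancer is an option, it can perform the redirect itself with a redirect action on the port 80 listener; a hedged AWS CLI sketch with a placeholder ARN:
# Redirect all HTTP traffic to HTTPS at the ALB (placeholder ARN)
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT:loadbalancer/app/my-alb/ID \
    --protocol HTTP --port 80 \
    --default-actions 'Type=redirect,RedirectConfig={Protocol=HTTPS,Port=443,StatusCode=HTTP_301}'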
