Deploying Ember JS application in EC2

I would like to deploy an Ember app on an EC2 Ubuntu instance.
I have installed Tomcat on the instance.
I have run ember build and generated the dist files.
I don't know how to make Tomcat serve the dist files that are generated during the build.
Can someone explain it step by step so that I can understand clearly?

I don't think you should be serving the Ember app from Tomcat. At least when I last evaluated it, Tomcat was much slower at SSL than Apache/Nginx, wasn't as fast with static content, required a redeploy of the WAR file whenever static content changed, and lacked the configuration options of the more commonly used HTTP servers.
A better approach is to reverse proxy to your app server (I assume you are running a Java app, since you are using Tomcat) and serve the Ember app from the reverse proxy. If you are running SSL, you would handle it all at the reverse proxy, not Tomcat. This is how I serve my Ember app and my Spring Boot app (the API powering my Ember app) from the same EC2 instance.
I'll show you how I do it on Red Hat, but you'll have to translate for Ubuntu (e.g., you have apt-get where I use yum).
Install Apache on your VM:
yum install httpd -y
Configure Apache as a reverse proxy in /etc/httpd/conf/httpd.conf:
<VirtualHost *:80>
    ProxyRequests Off
    ProxyPass /api http://localhost:8080/api
    ProxyPassReverse /api http://localhost:8080/api
</VirtualHost>
FallbackResource /index.html
This has two very important parts. First, you run your Tomcat server on http://localhost:8080/ (not on 80!) and have a servlet underneath /api or some other subpath. You need this distinction so that your Ember URLs do not conflict with your API URLs. If your Ember app runs under / and you have both an API endpoint and an Ember route at /users, how would the server distinguish which to serve? I argue that your API should run under /api/users to avoid collisions.
Second, FallbackResource /index.html makes unmatched paths return your index.html file. When your browser requests yourapp.com/someRoute, your HTTP server should simply return the index.html file; Ember then takes care of the routing on the client side. This means that regardless of which Ember route you are on, when the browser asks for that URL, you should always return index.html. I don't even know how you would configure a rule like this in Tomcat (you'll have to research rewrite rules/directives if you don't want to use a reverse proxy).
Lastly, within httpd.conf find the document root, e.g. DocumentRoot "/var/www/html". This is the path on your EC2 server where your static files are served from, and it is where the contents of your dist folder belong. For me, a typical deployment means ember build, scp to the server, then sudo rm -rf /var/www/html/* and sudo cp -r dist/. /var/www/html to replace the old Ember app with the new one. This works for me because my Ember app's contents are the only static files I need to serve. If you have other needs, you can't just delete and replace the document root like I do.
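Put together, a deploy from a workstation might look like this minimal sketch (the host name and key path are assumptions, adjust for your setup):
# build locally and copy the dist folder to the server
ember build --environment=production
scp -i ~/.ssh/my-key.pem -r dist ubuntu@my-ec2-host:/tmp/dist
# on the server: swap the old app for the new one
ssh -i ~/.ssh/my-key.pem ubuntu@my-ec2-host \
  'sudo rm -rf /var/www/html/* && sudo cp -r /tmp/dist/. /var/www/html'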
Make sure httpd/Apache is running! service httpd start on Red Hat. You don't have to restart the server when swapping out files.

Related

How to configure proxy for springboot and mediawiki?

Currently I'm trying to run both a Spring Boot application and a MediaWiki server (but I assume it could be any other server) on one machine simultaneously. They are accessed via different ports, e.g. 8080 and 7734.
Now I want to be able to access my Spring app as usual at localhost:8080/homePage, and if I type something like localhost:8080/wiki/faqPage (with wiki included at the beginning of the URL), there must be some setting (or maybe another proxy server?) to redirect requests to the MediaWiki instance. So a request to localhost:8080/wiki/faqPage would actually go to localhost:7734/faqPage. What is the best practice for achieving this?
If it helps, I'm using a Docker image and the docker-compose utility to run the MediaWiki instance.
Okay, Apache's ProxyPass and ProxyPassReverse in httpd.conf did the magic.
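For reference, a minimal sketch of what such a configuration could look like, assuming Apache listens on the public port and proxies to both apps (ports taken from the question; the /wiki prefix is stripped before forwarding):
<VirtualHost *:80>
    # forward /wiki/... to the MediaWiki container, dropping the /wiki prefix
    ProxyPass /wiki/ http://localhost:7734/
    ProxyPassReverse /wiki/ http://localhost:7734/
    # everything else goes to the Spring Boot app
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
Note that the more specific /wiki/ mapping must come before the catch-all / mapping.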

Move application from homestead to docker

My application consists of three domains:
example.com
admin.example.com
partner.example.com
All of these domains are handled by the same Laravel app. Each domain has its own controllers and views. Models and other core functionality are shared between all three domains.
Currently my local dev environment is built with Homestead (based on Vagrant), where each local domain (example.test, admin.example.test and partner.example.test) points to the same directory (e.g. /home/vagrant/app/public).
Because of deployment problems regarding different versions of OS, NPM, PHP, etc., I want to move to Docker. I've read a lot of articles about multiple domains or apps with Docker. Best practice seems to be to set up an Nginx reverse proxy which forwards all incoming requests to the desired app. Unfortunately, I haven't found examples for my case, where all domains point to the same application.
If possible, I would like to avoid having the same repository cloned three times, one for each Docker container running a specific part of the app.
So what would be the best approach to set up a docker environment?
I created a simple gist showing how I would do it:
https://gist.github.com/karlisabele/f7d91594c004e227e504473ce2c60508
The nginx config file is based on the Laravel documentation (https://laravel.com/docs/5.8/deployment#nginx). Of course, in production you would also want to handle SSL and map port 443 as well, but this should serve as a POC for you.
Notice that in the nginx configuration I use the php-fpm service name to pass the request to the php-fpm container. In Docker, service names can be used as host names for the corresponding service, so the line fastcgi_pass php-fpm:9000; passes the request to the php-fpm container's port 9000 (the default port the fpm image listens on).
Basically, what you want to do is simply declare in nginx that all three of your subdomains are handled by the same server configuration. Nginx then passes the request to php-fpm to actually process it, as sketched below.
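A minimal sketch of such a server block, following the Laravel docs' nginx config (the root path and PHP handling details are assumptions, not the gist's exact contents):
server {
    listen 80;
    # all three local domains are served by this one block
    server_name example.test admin.example.test partner.example.test;
    root /var/www/html/public;
    index index.php;

    location / {
        # let Laravel's front controller handle every route
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # "php-fpm" resolves to the php-fpm service's container
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }
}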
To test, you can just copy the two files from the gist into your project directory, replace YOUR_PROJECT_FOLDER in the docker-compose.yml file with the actual location of your project (it can be simply .:/var/www/html if you place the docker-compose.yml in the root of your project), then run docker-compose up -d. Add the domains to your hosts file (/etc/hosts on Linux/macOS) and you should be able to visit example.test and see your site.
Note: Depending on where your database is located, you might need to change its host if it's localhost at the moment, because the app will try to connect to a MySQL server from inside the php-fpm container, which of course does not have its own mysql-server running.
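To make the moving parts concrete, here is a minimal sketch of a docker-compose.yml along those lines (image tags and file names are assumptions, not the gist's exact contents):
version: "3"
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      # mount the Laravel project and the server block from above
      - .:/var/www/html
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
  php-fpm:
    image: php:7.3-fpm
    volumes:
      - .:/var/www/html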

How to establish a Apache2 proxy for a localhost:3000 node.js based application

I'm trying to figure out how to link my Apache2 server running on AWS Lightsail to an application I'm hosting that uses http://localhost:3000 when activated, a simple Node.js based CMS called Vapid. I have the server linked to my domain name, bigsheepcollective.com, and I can get Vapid running through the AWS terminal, but only the Apache2 landing page shows up on my domain. I saw a tutorial here that goes over establishing a proxy pass on an Nginx server, but I'm not sure how to do the same thing with Apache2.
I've tried using the Nginx tutorial and I've also done some extensive searching into proxy setups for Apache2, but I'm confused about what type of proxy I need for running an application that uses http://localhost:3000.
Hi, Bitnami Engineer here.
You can include these lines in the /opt/bitnami/apache2/conf/bitnami/bitnami.conf file or in the specific .conf file you created for your application
ProxyPass / http://127.0.0.1:3000/
ProxyPassReverse / http://127.0.0.1:3000/
This way you will access your application when accessing the public IP of your instance or its associated domain.
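On a stock Ubuntu Apache (outside Bitnami) the same idea would look roughly like this sketch; the vhost file name is an assumption, and the proxy modules must be enabled first:
sudo a2enmod proxy proxy_http
# /etc/apache2/sites-available/vapid.conf
<VirtualHost *:80>
    ServerName bigsheepcollective.com
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
sudo a2ensite vapid && sudo systemctl reload apache2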
This guide in our documentation explains the whole process to configure a Node.js application on top of Bitnami.
https://docs.bitnami.com/aws/infrastructure/mean/administration/create-custom-application-nodejs/

Separate frontend and backend with Heroku

I have an application, let's call it derpshow, that consists of two repositories, one for the frontend and one for the backend.
I would like to deploy these using Heroku, preferably on the same domain. I would also like to use pipelines for both parts separately, with staging and production environments for each.
Is it possible to get both apps running on the same domain, so that the frontend can call the backend on /api/*? Another option would be to serve the backend on api.derpshow.com and the frontend on app.derpshow.com, but that complicates security somewhat.
What are the best practices for this? The frontend is simply static files, so it could even be served from S3 or similar, but I still need the staging and production environments, automatic testing, and so on.
Any advice is greatly appreciated!
For what you are trying to do, you should use a web server to serve the static content and to provide access to the container (gunicorn, Tomcat, etc.) holding your app. This is also best practice.
Assume you use nginx as the web server, because it's easier to set up. The nginx config file would look like this:
# Server definition for project A
server {
    listen 80;
    server_name derpshow.com www.derpshow.com;

    location / {
        # Proxy to gUnicorn.
        proxy_pass http://127.0.0.1:<projectA port>;
        # etc...
    }
}

# Server definition for project B
server {
    listen 80;
    server_name api.derpshow.com www.api.derpshow.com;

    location / {
        # Proxy to gUnicorn on a different port.
        proxy_pass http://127.0.0.1:<projectB port>;
        allow 127.0.0.1;
        deny all;
        # etc...
    }
}
And that's it.
OLD ANSWER: Try using nginx-buildpack, which allows you to run NGINX in front of your app server on Heroku. Then you need to run your apps on different ports, map one port to api.derpshow.com and the other to app.derpshow.com, and then you can restrict calls to api.derpshow.com to localhost only.
Would just like to contribute what I recently did. I had a Node.js w/ Express backend and a plain old Bootstrap/vanilla frontend (using just XMLHttpRequest to communicate). To connect the two, you can simply tell Express to serve static files (i.e. serve requests to /index.html, /img/pic1.png, etc.).
For example, to tell Express to serve the assets in directory test_site1, simply do:
app.use(express.static('<any-directory>/test_site1'));
Many thanks to this post for the idea: https://www.fullstackreact.com/articles/deploying-a-react-app-with-a-server/
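In context, a minimal sketch of that setup (the directory layout and the API route are hypothetical):
// server.js: serve the frontend build and the API from one Express app
const express = require('express');
const app = express();

// static assets: /index.html, /img/pic1.png, etc.
app.use(express.static('test_site1'));

// example API endpoint the frontend can call with XMLHttpRequest
app.get('/api/hello', (req, res) => {
  res.json({ message: 'hello' });
});

app.listen(process.env.PORT || 3000);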
Note that all these answers appear to be variations of merging the code to be served by one monolith server.
Jozef's answer appears to be adding an entire nginx server on top of everything (both the frontend and backend) to reverse proxy requests.
My answer is about letting your backend server serve frontend requests; I am sure there is also a way to let the frontend server serve backend requests.

Reverse proxy for Lektor CMS

The Lektor server is running on port 5000 on localhost. I want to make it accessible via an Apache URL at http://myhost.org/lektor. Therefore, I've tried the httpd config snippet
ProxyPass /lektor http://127.0.0.1:5000
ProxyPassReverse /lektor http://127.0.0.1:5000
The HTML for the welcome page is found, but the static pages referenced there (e.g., http://myhost.org/static/style.css) are not. How can I make the changed URL known to Lektor?
The development server is not intended for production use. Production deployments are built from entirely static content and will resolve static assets correctly if you use the |url filter.
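For illustration, in a Lektor template the |url filter is used roughly like this (the stylesheet path is an example):
<link rel="stylesheet" href="{{ '/static/style.css'|url }}">
This makes Lektor emit a URL relative to the current page, so the asset resolves no matter where the built site is mounted.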
I had success with just slightly different syntax:
ProxyPass / "http://localhost:5000/"
ProxyPassReverse / "http://localhost:5000/"
Seems the trailing slash is needed. I'm not sure whether you need the actual name "localhost" in there, but that's what worked for me.
I did the following additional items with my setup, and it's rather slick:
Set up Lektor on an internal private-ip webserver
Set up two virtual hosts, for example lektor-web-dev.my.domain and lektor-web.my.domain
Set up self-signed certificates on the -dev domain, and used the above ProxyPass items in its vhost config (Apache)
Set up basic authentication on the -dev domain, so there's at least a basic level of authentication
Set up rsync deploy to loop back to localhost using an SSH keypair, and copy the output files to lektor-web.my.domain's webroot
Now, lektor-web.my.domain is still on a private webserver without certificates. However, on my public-facing webserver, I terminate SSL and proxy through to lektor-web.my.domain. I get the certificate with Let's Encrypt.
This is expandable to any number of websites. Keep the Lektor admins running using supervisor or some other process management system. Use 'lektor server -p portnum' to pick a different port for each site's admin. You can do group-based access to various admins using HTTP basic authentication along with an htpasswd and htgroup file.
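For instance, a minimal supervisor program entry per site might look like this sketch (the path, user, and port are assumptions):
[program:lektor-site1]
command=lektor server -p 5001
directory=/srv/lektor/site1
user=lektor
autorestart=true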
