AWS EC2 deploy MEAN stack project

I tried deploying my application; I have Angular, Express/Node, and Mongo.
I set up nginx for Angular: I pointed the sites-available nginx config to Angular's index.html after running ng build --prod locally and transferring the files to the server.
I also ran npm start for Node and started Mongo.
Node shows as started and listens on port 3001, but Angular isn't able to communicate with the backend; I see 404s in Angular's console logs.
I'm also not sure about the structure of MEAN projects for deployment:
should they be a single project, i.e. should the ng build --prod output sit inside the Node directories and be referenced from there?

I resolved this issue in the following way:
Added an /api prefix to the Node routes, so every path formed on the server when Angular hits the backend starts with /api.
Wrote a bypass in nginx sites-available/file-name that proxies /api requests to port 3001, which is the port Node listens on (see the sketch after these steps).
Everything that isn't /api stays on localhost's same port, 80 in my case, or a different one if you have configured HTTPS.
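A minimal sketch of what that nginx site config can look like (server_name and root are placeholders for your own values; port 3001 matches the Node server above, and the Express routes are assumed to be mounted under /api):

server {
    listen 80;
    server_name example.com;

    # Serve the ng build --prod output
    root /var/www/angular-app;
    index index.html;

    # Anything under /api is proxied to the Node backend
    location /api {
        proxy_pass http://localhost:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Everything else falls back to index.html so Angular's router can take over
    location / {
        try_files $uri $uri/ /index.html;
    }
}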
If you have a better, more standard approach, please do answer; this feels like a workaround to me.

Related

Shopify CLI - How to start a local server without ngrok?

My organization blocks ngrok, so every time I run the Shopify serve command, it fails with a connection error.
So is there any way to just start the Shopify local server? That way I can use cloudflared to tunnel the local server to a subdomain.
Searching Google, I found no answer to this question.
I had success running the server without ngrok.
Here are my steps:
Prepare a cloud server and install nginx.
Configure the domain settings and forward requests to your local port.
If you are behind a router, only the router has a public IP, so you need to forward the request on to your PC; you can configure this (port forwarding) in the router.
Then update the host value in your .env file.
Go to partner.shopify.com, open the app settings, and add your URL to the whitelist.
Use npm run dev to start your project.
I also set up HTTPS in nginx (see the sketch below). Because the ngrok servers are far away from my location, the starting time is much faster this way.
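For illustration, a rough sketch of the nginx config on the cloud server; the domain, certificate paths, public IP, and forwarded port are all placeholders you need to replace with your own values:

server {
    listen 443 ssl;
    server_name shopify-dev.example.com;

    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    location / {
        # YOUR_HOME_IP:8081 is whatever your router forwards to the dev machine
        proxy_pass http://YOUR_HOME_IP:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}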
Start the server with
npm run dev
instead of
shopify app serve

Move application from Homestead to Docker

My application consists of three domains:
example.com
admin.example.com
partner.example.com
All of these domains are handled by the same Laravel app. Each domain has its own controllers and views; models and other core functionality are shared between all three domains.
Currently my local dev environment is built with Homestead (based on Vagrant), where each local domain (example.test, admin.example.test and partner.example.test) points to the same directory (e.g. /home/vagrant/app/public).
Because of deployment problems with differing versions of OS, NPM, PHP, etc., I want to move to Docker. I've read a lot of articles about multiple domains or apps with Docker. Best practice seems to be to set up an nginx reverse proxy which redirects all incoming requests to the desired app. Unfortunately, I haven't found examples for my case, where all domains point to the same application.
If possible I would avoid having the same repository cloned three times, with each Docker container running one specific part of the app.
So what would be the best approach to set up a docker environment?
I created a simple gist for you to look at, showing how I would do it:
https://gist.github.com/karlisabele/f7d91594c004e227e504473ce2c60508
The nginx config file is based on the Laravel documentation (https://laravel.com/docs/5.8/deployment#nginx). In production you would of course also want to handle SSL and map port 443 as well, but this should serve as a POC for you.
Notice that in the nginx configuration I use the php-fpm service name to pass the request to the php-fpm container. In Docker, service names can be used as hostnames for the corresponding services, so the line fastcgi_pass php-fpm:9000; passes the request to the php-fpm container's port 9000 (the default port the fpm image listens on).
Basically, what you want to do is simply define in nginx that all 3 of your subdomains are handled by the same server configuration, as sketched below. nginx then passes each request to php-fpm to actually process it.
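A stripped-down sketch of that server block (the root path and service name follow the gist's assumptions; adjust them to your setup):

server {
    listen 80;
    # All three domains are answered by this single server block
    server_name example.test admin.example.test partner.example.test;

    root /var/www/html/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # "php-fpm" is the docker-compose service name, resolved on the compose network
        fastcgi_pass php-fpm:9000;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }
}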
To test, copy the two files from the gist into your project directory, replace YOUR_PROJECT_FOLDER in the docker-compose.yml file with the actual location of your project (it can simply be .:/var/www/html if you place docker-compose.yml in the root of your project), then run docker-compose up -d. Add the domains to your hosts file (/etc/hosts on linux/mac) and you should be able to visit example.test and see your site.
Note: Depending on where your database is located, you might need to change its host if it's localhost at the moment, because the app will try to connect to a mysql server from inside the php-fpm container, which of course does not have its own mysql server running.

Deploying Ember JS application in EC2

I would like to deploy an Ember app on an EC2 Ubuntu instance.
I have installed Tomcat on the EC2 Ubuntu instance.
I have run ember build and generated the dist files.
I don't know how to make Tomcat serve the dist files generated during the build.
Can someone explain it step by step so that I can understand clearly?
I don't think you should be serving the Ember app from Tomcat. At least when I last evaluated it, Tomcat was much slower at SSL than Apache/nginx, wasn't as fast with static content, required redeploys of the WAR file whenever static content changed, and lacked the configuration options of the more commonly used HTTP servers.
The better approach is to reverse proxy to your app server (I assume you are running a Java app, since you are using Tomcat) and serve the Ember app from the reverse proxy. If you are running SSL, you would handle it all at the reverse proxy, not Tomcat. This is how I serve my Ember app and my Spring Boot app (the API powering my Ember app) from the same EC2 instance.
I'll show you how I do it on Red Hat, but you'll have to translate for Ubuntu (e.g. you have apt-get where I use yum).
Install apache on your VM
yum install httpd -y
Configure apache as a reverse proxy in /etc/httpd/conf/httpd.conf
<VirtualHost *:80>
    ProxyRequests Off
    ProxyPass /api http://localhost:8080/api
    ProxyPassReverse /api http://localhost:8080/api
</VirtualHost>
FallbackResource /index.html
This has two very important parts. First, you run your Tomcat server on http://localhost:8080/ (not on 80!) and have a servlet under /api or some other subpath. You need this sort of distinction so that your Ember URLs do not conflict with your API URLs. If your Ember app runs under / and you have both an API endpoint /users and an Ember route /users, how can the server distinguish which should be served? I argue that your API should run under /api/users to avoid collisions.
Second, FallbackResource /index.html makes unmatched paths return your index.html file. When your browser requests yourapp.com/someRoute, your HTTP server should simply return the index.html file; Ember will then take care of the routing on the client side. This means that regardless of which Ember route you are on, when the browser asks for that URL, you should always return index.html. I don't even know how you would configure a rule like this in Tomcat (you'll have to research rewrite rules/directives if you don't want to use a reverse proxy).
Lastly, within httpd.conf find the document root, e.g. DocumentRoot "/var/www/html". This is the path on your EC2 server that static files are served from, and it's where the contents of your dist folder belong. For me, a typical deployment means ember build, scp to the server, then sudo rm -rf /var/www/html/ and sudo cp -r dist/. /var/www/html to replace the old Ember app with the new (see the commands below). This works for me because the contents of my Ember app are the only static files I need to serve. If you have other needs, you can't just delete and replace the old document root like I do.
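Put together, the deploy looks something like this (user and host are placeholders; this assumes DocumentRoot is /var/www/html and that nothing else lives there):

# Build locally and copy the output to the server
ember build
scp -r dist user@your-ec2-host:~/

# On the server: replace the old app with the new build
sudo rm -rf /var/www/html/
sudo cp -r dist/. /var/www/html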
Make sure httpd/apache is running: service httpd start on Red Hat. You don't have to restart the server when changing out files.

How to integrate Angular 2 into an existing J2EE Spring project and run it on the same port?

I have an existing project with Spring on the back-end and AngularJS 1 on the front-end. When I run the Spring server, it opens just one port for me, 8080, and I can access the REST APIs and the AngularJS front-end through it.
But now I want to move to the new Angular 2.
How do I make it use the same port, 8080, for both the APIs and Angular 2?
The reason I'm asking is that almost every tutorial I find uses the Angular CLI (npm install -g angular-cli, https://cli.angular.io/), which installs another lite server alongside it, and then I have to run the front-end on a different port.
How do I install the minimum required dependencies for Angular 2, without its own server?
For example, in a tutorial like this:
https://www.youtube.com/watch?v=HhroyiYFmjc
the REST API runs on port 8080
and Angular 2 on port 3000.
To deploy your app generated by Angular CLI under your Java server, you can run ng build (or ng build --prod) and copy the files generated in the dist dir to your server's static dir.
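For example (a sketch only; the exact static directory depends on your project layout, so treat these paths as assumptions):

# Build the Angular app for production
ng build --prod
# Spring Boot serves src/main/resources/static by default;
# for a WAR-based project, copy into src/main/webapp instead
cp -r dist/* ../my-spring-app/src/main/resources/static/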
In dev mode, if you want to keep running the app with the dev server (it runs on port 4200 by default), you can configure a proxy by creating a file proxy.conf.json in your Angular project with the following content (assuming /api is the prefix of your API URLs):
{
  "/api": {
    "target": "http://localhost:8080",
    "secure": false
  }
}
Now you can run the app in dev mode using the following command:
ng serve --proxy-config proxy.conf.json
This will allow your dev server (port 4200) to access the REST API on port 8080.

link_to strips port from site hosted in container

This is a bit of a tricky situation. I'm testing deployment of a Laravel application which I've recently containerised. I've made a container based on php which runs Apache inside it to serve the application. If I simply run this container, bound to port 5000, then link_to('/login') correctly generates a link pointing to localhost:5000/login.
However, now I'm testing an actual deployment scenario where this container runs behind an nginx load balancer. I've set up a VM using Vagrant which runs two containers: one for the nginx load balancer, and one for the Apache/Laravel application. I access the VM's port 80 on my host's port 7000.
In this situation, link_to('/login') now generates links pointing to localhost/login. Where did the port go? It should link to localhost:7000/login, because that's the port I'm accessing the page on.
How can I debug this? I've tried looking into the implementation of link_to, but I suspect the problem is elsewhere.
EDIT
I've just discovered that, in addition, if I serve the site over HTTPS (terminated at nginx; Apache still does everything over HTTP), the scheme is also stripped from links created by link_to. Instead of https://localhost:7443/login, the link looks like localhost/login.
The solution is to use something like fideloper/proxy to properly handle the proxy headers added by Nginx. I thought I had done this, but I'd forgotten to add the facade to app/config/app.php.
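For this to work, the nginx load balancer also has to send the forwarded headers that fideloper/proxy reads. A minimal sketch of the relevant proxy config (the upstream name laravel-app and the ports are assumptions for this setup):

server {
    listen 443 ssl;

    location / {
        proxy_pass http://laravel-app:80;
        # These headers let the app reconstruct the external scheme, host, and port
        proxy_set_header Host              $host;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port  $server_port;
    }
}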
