Share a folder between Heroku apps

For my project, I would like to be able to set up multiple websites (apps), and depending on each client's requirements I will activate and configure modules. Those modules could be a news module, an image viewer module, and so on.
My engine consists of scripts and libraries that manage things such as the web app routing, the modules, and user rights for the CMS. I would like this code to be shared among all my apps to avoid useless duplication.
I would like to know the best way to do this on Heroku, since I am totally new to it and not entirely sure it is feasible.
Also, am I wrong to believe that each website is an app, even if the only differences between them are the template and a single setup file?
Thank you

If you have no experience with Git, get familiar with it first.
I would suggest having one application with different branches, each holding the configuration specific to one instance.
Create a Heroku app for each website.
Then push the specific branch to the specific app, e.g.
git push git@heroku.com:<app-name>.git <branch>:master
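For example, a minimal sketch of that workflow (the app names, remote names and branch names below are invented for illustration):

# One Heroku app per website, each with its own git remote
heroku create client-a-site --remote client-a
heroku create client-b-site --remote client-b
# Push the branch holding each site's configuration to that site's app
git push client-a client-a-config:master
git push client-b client-b-config:master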

Related

Right way to run a Vue app with a Phoenix application as proxy

Right now, I am running a Vue app within a Phoenix app. I first created a Phoenix project and then started a Vue app named assets, for running it in the development environment. I have added
watchers: [npm: ["run", "build", cd: Path.expand("../assets", __DIR__)]]
which creates a build each time; that build is then used in app.html.eex from priv/static.
For deployment, I am using the Phoenix static buildpack, which creates a build before the deploy in production and then runs the Phoenix app. Everything is working fine, but it feels like the wrong approach, because
the overall benefits of a Vue application are not being realized, e.g. code splitting and loading code chunks only on page request, plus many other webpack features we could use within a Vue app, since we are just creating the build and putting it in production.
My issue is that I have seen many tutorials that run a Vue app with the API behind a proxy, so that the main app is Vue and the Phoenix API works behind the proxy.
Right now I have this setup working for deployment and in development mode. My question is how I can achieve the opposite:
starting the Vue application so that it automatically starts the Phoenix app as well, and deploying to Heroku so that the API runs as before but the Vue app is more than just static JS and CSS files.
Update: Is it possible to make an umbrella application in which one app is Vue and one is Phoenix?
This is more of an extended comment, but it was getting really big so I moved it to an answer.
To start two or more apps you will need something like foreman (on Heroku, I think there's a buildpack for it) or systemd, or even starting a Node.js server from your Elixir app (not sure that is doable on Heroku).
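To illustrate the foreman route for local development (the process names and the assets command are assumptions, not something from the question), a Procfile along these lines lets one command start both:

# Procfile.dev, started with: foreman start -f Procfile.dev
web: mix phx.server
assets: cd assets && npm run serve

On Heroku itself only the web process type receives HTTP traffic from the router, which is one reason the separate-apps options discussed below can end up simpler there.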
You can also split your components manually through the webpack config used by Phoenix; this can be a bit more involved than just serving a single JS file. The reason to split it into two separate services needs to be thought through, but this is usually achieved (if I'm not mistaken, it's been a while since I used it this way) by having different entry points. For webpack to split assets (CSS and others) properly, you'll also need to write your Vue components in a way that lets webpack understand the dependencies (at least this was the case, and there may be some complexities with webpack chunking and Phoenix digest when using dynamic components, etc.).
Another option is to use an independent Nuxt app, which bundles up a Vue.js app with everything you need: webpack, a server, Vuex, a sensible config and "structure", etc. Now you have two distinct applications, each running an HTTP server; you use asyncData and fetch to populate/hydrate your front-end with data from the Phoenix app, and you can use async components and all that. Then you deploy the front-end (Nuxt app) to one Heroku instance and the Phoenix server to another instance or somewhere else.
At that point your Phoenix app is basically an API for the front-end. So the Vue app has to be built with that in mind, and now you have two applications to deploy and take care of. It helps on a lot of fronts, but it's also more complexity (authorization, cookies, etc.), hence why you need to weigh whether it's worth doing. The major benefit is that, since they're now two apps, styling and other front-end changes can be deployed without re-deploying the back-end.
Depending on the type of front-end, you can also deploy the Nuxt app as a static website to something like S3/CloudFront or any other cloud storage service. For instance, if you have X public pages that are mostly static content and everything dynamic sits behind a login wall or similar, then that solution works fine too.
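A rough sketch of that static route, assuming a Nuxt project and an S3 bucket that already exists (the bucket name is invented):

# Generate the site as static files, then sync them to the bucket
npx nuxt generate
aws s3 sync dist/ s3://my-nuxt-site --delete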
All three approaches are valid depending on what you need and your reason for doing it.

How do I manage microservices with DevOps?

Say I have a front-end node and three back-end nodes: tools, blog, and store. Each node communicates with the others. Each of these nodes has its own set of languages and libraries, and its own Dockerfile.
I understand the DevOps lifecycle of a single monolithic web application, but cannot work out how a DevOps pipeline would work for microservices.
Would each microservice get its own GitHub repo and CI/CD pipeline?
How do I keep the versions in sync? Let's say the tools microservice uses blog version 2.3, but blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync as to which versions they are supposed to rely on?
If I'm deploying the tools service to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
Where can I go to learn more?
Would each microservice get its own GitHub repo and CI/CD pipeline?
From my experience you can do both. I have seen some teams put multiple microservices in one repository.
We were putting each microservice in a separate repository, as the Jenkins pipeline was built in a generic way that could build them all the same way. This included having some configuration files in specific directories like "/Scripts/microserviceConf.json".
This helped us in some cases. In general you should also consider cost, as GitHub's pricing model takes into account how many private repositories you have.
How do I keep the versions in sync? Let's say the tools microservice uses blog version 2.3, but blog just got pushed to version 2.4, which is incompatible with tools. How do I keep the staging and production environments in sync as to which versions they are supposed to rely on?
You need to be backwards compatible. If your blog version 2.4 is not compatible with a tools service that expects 2.3, you have high dependency and coupling, which goes against one of the key benefits of microservices. There are many ways to get around this.
You can introduce versioning for your microservice APIs. If you have a breaking change to, let's say, an API, you still need to support the old version for some time and create a new v2 of the API. For example, POST "blogs/api/blog" would get a new counterpart POST "blogs/api/v2/blog" with the new features, and the tools microservice then has some bridge time during which you support both APIs so it can migrate to v2.
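To make the bridge period concrete, both versions are simply served side by side for a while; the host and payloads below are invented for illustration:

# Existing consumers (tools) keep calling v1 until they have migrated
curl -X POST https://blog.example.com/blogs/api/blog -d '{"title":"hello"}'
# New consumers call v2, which carries the breaking change
curl -X POST https://blog.example.com/blogs/api/v2/blog -d '{"title":"hello","tags":[]}'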
Also take a look at semantic versioning.
If I'm deploying the tools service to multiple different servers, whose IPs may change, how do the other services find the nearest location of this service?
I am not quite sure what you mean here, but this goes in the direction of microservice orchestration and service discovery. Usually your cloud provider has specific services to deal with this. You can take a look at AWS ECS and/or the AWS EKS Kubernetes service and how they handle it.
For a monolithic application, I can run one command and simply navigate to a site to interact with my code. What are good practices for developing locally with several different services?
I would suggest using Docker and docker-compose to create your development setup. You would create a local development network of Docker containers representing your whole system. This would include your microservices, infrastructure (database, cache, helpers) and anything else. You can read more about it in this answer, in the section "Considering the Development Setup".
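A minimal docker-compose sketch of such a local network (service names, images and ports are assumptions for illustration):

# docker-compose.yml
version: "3"
services:
  frontend:
    build: ./frontend        # the front-end node
    ports: ["3000:3000"]
    depends_on: [tools, blog, store]
  tools:
    build: ./tools           # each microservice builds from its own Dockerfile
  blog:
    build: ./blog
  store:
    build: ./store
  db:
    image: postgres:13       # shared infrastructure for local development
    environment:
      POSTGRES_PASSWORD: example
  cache:
    image: redis:6

docker-compose up --build then starts the whole system on one machine, and each container can reach the others by service name (e.g. http://blog:4000, port being whatever that service listens on).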
Where can I go to learn more?
There are multiple sources for learning this. Some are:
https://microservices.io/
https://www.datamation.com/applications/devops-and-microservices.html
https://www.mindtree.com/blog/look-devops-microservices
https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/multi-container-microservice-net-applications/multi-container-applications-docker-compose

Heroku: Can a staging environment be accessible to only some people?

Let's assume I have staging and production versions of my application. I want the staging version to be accessible only to specific people, like developers and testers. Is there that kind of configuration on Heroku, where the deployed address can only be accessed by users who have some key?
There is nothing like this in the Heroku product at this time. You would need to handle this as part of your application in some way.
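For example (not a Heroku feature, just an application-level sketch in Ruby/Rack): wrap the staging app in HTTP basic auth and keep the credentials in config vars, so production, where the vars are unset, stays open. MyApp and the variable names are placeholders.

# config.ru (sketch): enable basic auth only when staging credentials are configured
require 'rack/auth/basic'

if ENV['STAGING_USER'] && ENV['STAGING_PASSWORD']
  use Rack::Auth::Basic, 'Staging' do |username, password|
    username == ENV['STAGING_USER'] && password == ENV['STAGING_PASSWORD']
  end
end

run MyApp  # placeholder for the real application

You would then set STAGING_USER and STAGING_PASSWORD with heroku config:set on the staging app only.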

Need for creating different Projects in Google API console

I have basically two URLs: http://xyzwebsite.com (for development testing) and http://abcwebsite.com (for production). I have a simple login mechanism where a user can click on the Google Plus icon to log in rather than using their username and password. I created one project for development, with its own client ID, and a separate project for production with a separate client ID.
But I tested both of the URLs above with the client ID of the development project and it worked fine. I am wondering why there is a need for having multiple projects in the Google API console?
There is no particular need. A single project can have several URLs and client IDs for use.
Some reasons you might use multiple projects include:
Changing project settings in dev without worrying about breaking production
If you have a development script that gets into an endless loop or something, it might use up all of the quota and the production app might start throwing errors
You might want clear branding on the dev app that explicitly identifies it as not production.
Some unknown reason I can't think of.

Creating pages with Nesta on Heroku

I am looking to roll a simple CMS with Ruby and preferably Sinatra. www.nestacms.com looks like a terrific candidate.
Some key objectives:
Allow business users to add/edit/remove pages (not via git but via app functionality)
Deploy on Heroku
As I understand it, Nesta pages are generated from static page files in your deployment, which is fine if you add them via git and push to Heroku.
But if you want to create pages (files) from within a Heroku web app, this isn't possible due to Heroku's read-only file system.
Looking for help around:
Achieving objectives with Nesta and Heroku
Alternative approaches
GitHub has an editor to manipulate files, and buttons to create/merge branches in the browser.
You may want to store your content-related media on a cloud storage service (such as Dropbox, Google Drive, Cloudflare, etc.). You can also connect/mount these storages like drives via WebDAV or other tools, depending on your preferred service provider and OS.
snap-ci offers a free and easy-to-use integration service for automated tests and/or deployment (to Heroku or your own VPS/server). When the master branch changes, a GitHub webhook triggers snap-ci and your website deploys in about a minute on average.

Resources