Design of a cloud microservice on Heroku: advice [closed]

I am new to the world of microservices and I have tried to learn about them and how they could be applied to my needs. I need to design a cloud platform that is easily maintainable and scalable, with the following components (as far as I see them):
Rails API + PostgreSQL (microservice 1)
Frontend framework (microservice 2)
Some Python script (microservice 3)
Some other Python script (microservice 4)
Inspired by this question & answer, each microservice is a separate Heroku app. What about security between them when they talk to each other, and what about response time?
Also, since the service is meant to grow, it will get expensive sooner or later. How can I optimize cost in this situation? I just discovered CaptainDuckDuck, but I'm wary of its small user base since it's quite new and not as popular as other PaaS options. Is the only solution to go with something like DigitalOcean or AWS EC2 and manage ourselves the job that Heroku does?
Also, doing microservices like this is not really a microservice design, since the services are not all hosted on the same machine. Am I right?
Would a more microservice-friendly approach be to use Heroku Private Spaces (even if that doesn't solve the cost issue)?
For the record, I already have this design up and running. So it's not a matter of "will it work?", but rather "is it the right way?".
Thanks for your feedback.

In theory, you could indeed deploy each of your microservices as a completely separate Heroku app, as you suggested.
However, depending on your requirements, a MUCH simpler, possibly better and almost definitely cheaper approach may be to deploy them as separate microservices in one polyglot Heroku app, using Heroku dynos.
For example, you could run your Rails API as the web dyno of your single app. In that case you might want it to also serve your front-end framework.
You should consider using Heroku Postgres as a managed DBaaS. It will be a breeze to connect your Rails web dyno to your Heroku Postgres instance.
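For illustration, here is a minimal sketch of the Rails side of that connection, assuming the standard Heroku Postgres add-on, which sets a DATABASE_URL config var (recent Rails versions pick it up automatically, so this mostly just makes the convention explicit):

```yaml
# config/database.yml -- production block only (sketch)
# Heroku Postgres injects DATABASE_URL; Rails reads it from the environment.
production:
  adapter: postgresql
  url: <%= ENV["DATABASE_URL"] %>
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
```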
You might then want to define each of your Python scripts as a separate process type in your Procfile; you need to do so if they must be "always on".
Alternatively, depending on your requirements, you might want to consider using one-off dynos for your Python scripts.
Either way, your Python scripts will run on separate dynos.
Note that each of your process types can be separately scaled.
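As a concrete illustration, the Procfile for such a polyglot app could look roughly like the sketch below; the worker names and script paths are made up for the example, and only the web process type has special meaning to Heroku:

```
# Procfile (sketch): one Rails web process plus two always-on Python workers
web: bundle exec puma -C config/puma.rb
worker_one: python scripts/script_one.py
worker_two: python scripts/script_two.py
```

Each process type can then be scaled independently, e.g. heroku ps:scale web=2 worker_one=1 worker_two=0.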
One thing you need to consider is how your microservices interact (if you need them to do so). There are many ways to approach this, but note that only your web dyno instances can listen for HTTP/S traffic. See here for some ideas on this.
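On the interaction point, one common pattern is for the worker dynos to call the web dyno's public HTTPS endpoint, authenticating with a shared secret kept in a config var. A minimal sketch, with a hypothetical endpoint path and config var names:

```python
# Worker-side sketch: call the Rails API over HTTPS with a shared token.
# APP_BASE_URL and INTERNAL_API_TOKEN are hypothetical config vars you would
# set with `heroku config:set`; the /internal/jobs path is made up.
import os

import requests

BASE_URL = os.environ["APP_BASE_URL"]        # e.g. https://your-app.herokuapp.com
TOKEN = os.environ["INTERNAL_API_TOKEN"]     # shared secret checked by the Rails API


def fetch_pending_jobs():
    resp = requests.get(
        f"{BASE_URL}/internal/jobs",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    print(fetch_pending_jobs())
```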

Related

How to scale a Spring Boot app to keep the same performance? [closed]

My question is theoretical (I am not asking about the steps for scaling); it is about keeping the same performance.
For example, our web site (Spring Boot based) gets 100 visits per day, and after a year it has grown to 1,000,000 visits per day. In this situation I have the following basic ideas, but I need to know more and whether these ideas are good or bad:
Using cloud services
Load balancer
Using microservices and applying distributed system techniques.
If read operations are much more than write or update, a NoSQL db can be used.
If we use JWT tokens for authentication, a distributed system would not be a problem on the security/auth side, I think.
... etc.
Could you please share your ideas and comment on the ideas above? Any help would be appreciated.
There have been several POCs (proofs of concept) and proven deployment strategies for better availability.
Keeping to your points, I am summarizing and hopefully adding a bit more clarity.
Using cloud services --> This is the platform you choose; for example, one can deploy on-premise or on a cloud such as AWS, Azure, or GCP. Not directly related to the scalability question at the moment.
Load balancer --> Balances the load when you have multiple instances of your microservice. For example, you can build Docker images of your microservice and deploy them as Pods on a Kubernetes cluster, where you can run more than one replica (a replica is a copy of the same service). The load balancer then distributes HTTP requests among the Pods; a minimal sketch follows after these points.
Using microservices and applying distributed system techniques --> You can, but make sure to adhere to best practices and proven microservice deployment strategies. Read more about them here: https://www.urolime.com/blogs/microservices-deployment-strategies/
If read operations are much more than write or update, a NoSQL db can be used. --> Definitely; in fact, you can decompose your microservices based on the number of transactions or read/write operations, and you can use a NoSQL DB like Couchbase or MongoDB.
If we use JWT tokens for authentication, a distributed system would not be a problem on the security/auth side. --> Such mechanisms are usually centralized, and a JWT token is only valid for a limited time.
So there might be several other options for scaling, but the most used is the one I mentioned in point 2.
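To make point 2 more concrete, here is a minimal sketch of a Kubernetes Deployment running three replicas behind a Service; the service name, image, and port are placeholders:

```yaml
# deployment.yaml (sketch): three replicas of a Spring Boot microservice
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-service
  template:
    metadata:
      labels:
        app: demo-service
    spec:
      containers:
        - name: demo-service
          image: registry.example.com/demo-service:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
# Service that load-balances requests across the pods
apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  selector:
    app: demo-service
  ports:
    - port: 80
      targetPort: 8080
```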
I highly suggest you get a grip on the basics. Here are a few links which should be helpful:
https://microservices.io/patterns/microservices.html
https://medium.com/design-microservices-architecture-with-patterns/decomposition-of-microservices-architecture-c8e8cec453e

Clearing up misconceptions about Amazon (EC2) and Rackspace [closed]

I'm friends with the owner of a small creative business (with multiple departments). Until now they have been using a dedicated server (via a 3rd party) for a lot of internal projects, and they've been known to iframe a few small dev projects (like photo galleries, one-page sites, etc.) off and on for some of their clients (some with high-traffic sites).
They're looking to switch from the dedicated server to a cloud environment. The owner is enamored with Amazon's cloud services but still wanted some alternative options. They also want the new environment to mirror the current one as much as possible (Linux/CentOS, PHP 5.3, MySQL databases), but with the ability to scale when desired.
So the misconceptions I need cleared up and questions I have are:
1) I always assumed Amazon's cloud service was more suitable for high-end, high-traffic, complex web applications (Netflix, Pinterest, Instagram, etc.) rather than the typical server use listed above. Is this correct?
2) Is it possible to mirror their current setup on Amazon?
3) If number 1 is not true, but they instead chose Rackspace, could they run heavy web apps like Netflix, Pinterest, or Instagram on a Rackspace cloud server if they ever decided to do something that advanced (is Rackspace scalable in the same way EC2 is)?
1) Amazon AWS is also suitable for this environment, or even smaller ones (they offer instances as small as "Micro", which are far less capable than what you are describing, all the way up to GPU compute clusters).
2) Yes. That is a very common setup for an AWS-based solution. In fact, I recently migrated something similar from Rackspace to AWS.
3) As noted in #1, AWS would work fine here as well. However, you can certainly mix what runs on Rackspace and what runs in the AWS cloud; keep latency and security in mind if the two parts of the solution need to communicate with each other. Rackspace also has a cloud offering, but it is not as mature as Amazon's.

Scaling a Meteor app on Heroku

In an answer to another question, it's noted that "Apps deployed to the hosted servers with 'meteor deploy' do not yet have any guarantees or SLAs about scaling." So that rules out using their hosted servers if I want to be sure I can fully scale right now.
The answer further notes that "A server bundle generated with 'meteor bundle' is basically a single process app. It is up to you wire it up to multiple instances, or however you want to implement auto-scaling."
After reading that, I'm still very unclear on the question of scaling. On Heroku, I assume I can run "meteor bundle" single process apps in dynos. But if I use many dynos, each running a Meteor server bundle, is Meteor designed so that they can be wired up so that they are all synchronized with the same data (even if there's a lag)?
Answering my own question, the Meteor team has announced a roadmap which includes the scalability plans, for inclusion in Meteor 1.0.
Meteor is still a very young platform. Before scalability, I would personally raise the question of security, as Meteor currently has no security model in the public release. There is also no mention of security in the Meteor docs, but the Meteor team has confirmed that they are working on it and that a future release will have it. Have a look here: https://stackoverflow.com/questions/10100813/when-can-we-expect-data-validation-and-security-in-meteor
So I think you and I (for the security implementation) have to wait for more releases, and perhaps before 1.0 scalability will be handled internally; at the least they should have documentation explaining how to do it.
To get some idea of how scalability will be handled, and to get a better picture of it, I think someone from the Meteor team should answer about scalability.
You can deploy Meteor apps to Heroku, but you need to stick with one dyno, because Heroku does not support WebSockets or sticky sessions.
So you may need to find another PaaS provider; Nodejitsu is a good option. If you want to scale to multiple instances, you need to find a way to sync write operations between instances.
Then you'll need Meteor Cluster - http://goo.gl/2aHJ2
I recently asked a similar question (Which PaaS would be best for a Meteor JS app that needs to be scalable?), and one of the answers explained the Heroku situation very well, I thought; see https://stackoverflow.com/a/16468418/2311632. It is also pointed out (https://stackoverflow.com/a/16468609/2311632) that one could deploy on meteor.com. While scaling is still on the roadmap, presumably they have addressed or are addressing some scaling issues in-house, or can otherwise keep their service at the cutting edge of what's possible in scaling Meteor apps. Otherwise, you could go with EC2 and scale vertically (boost the power of a single instance) until Meteor ships official scaling solutions. Getting set up with EC2 is new to me, but this answer (https://stackoverflow.com/a/16468826/2311632) looks like a good starting point. I haven't tried it yet, but likely will soon.

Will it ever be possible for developers to not have to worry about server configuration? Should we have to worry about this? [closed]

I'm currently looking at hosting solutions for my Ruby on Rails SaaS web application, and the biggest issue I see is that if I go with something like Amazon EC2, then I still need to configure my own server and install what I need (e.g. database, programming framework, application server, etc.). Each one of these is an opportunity for something to go wrong. I also have to worry about how my data is getting backed up, how frequently, and a host of other "low-level" details. Being a startup, I don't have the resources for a sysadmin, so I would have to play one myself. I currently do some work for a startup and my boss is always talking about how great EC2 is because it lets us "get out of the hardware business" - in reality, though, it doesn't feel that way, because we still have to set up the server instances, still have to install software, still have to configure the software properly. It feels like we're still in the hardware business, just that we don't really own the server we're using.
In contrast, a service like Heroku (which actually uses EC2 underneath, I believe) basically takes care of all the low-level details. They do automatic backups for me; I just specify the frequency. They have a server configuration already set up. They have ways to manage it and keep it running so I don't have to monitor traffic. I can focus on my application and just deploy the code, and let them worry about administration and making sure the database is properly configured with the web server and the right folders have permissions.
The problem with Heroku is obviously that I don't have control over these things if I wanted to modify them. Heroku uses nginx as its web server; if I want to use Phusion Passenger on Apache to stay on the "cutting edge" of RoR development, I'm SOL. If I need to make a quick patch in production (root of all evil, I know, but it happens sometimes), I don't have SSH access to Heroku's servers. If I need to set up a new database user to allow somebody else to remotely access data, I don't think I can do this. And worst of all, if something does happen with the server, I have no way of doing anything except wait for Heroku to fix it.
Basically at what point, if ever, can we as developers focus on our code and application and not have to play sysadmin with server configuration? As a startup with limited resources and limited knowledge of configuring servers (enough to get by), would I be better off sacrificing some configurability for the ability to let somebody else worry about the hardware/software end of things?
Make the server config part of your project and use scripts to set up and tear down your servers. Keep everything under VCS and use the scripts routinely to recreate your development setup.
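For example, a minimal provisioning sketch using Fabric in Python; the host, package list, and paths are placeholders, and in practice you might reach for a dedicated tool such as Chef, Puppet, or Capistrano instead:

```python
# provision.py (sketch): scripted server setup, kept in version control.
# Assumes Fabric 2.x; HOST and PACKAGES are placeholders for your setup.
from fabric import Connection

HOST = "deploy@your-server.example.com"
PACKAGES = ["nginx", "postgresql", "git"]


def provision():
    conn = Connection(HOST)
    conn.sudo("apt-get update -y")
    conn.sudo("apt-get install -y " + " ".join(PACKAGES))
    # Push the app's own config files from the repo onto the server.
    conn.put("config/nginx.conf", "/tmp/nginx.conf")
    conn.sudo("mv /tmp/nginx.conf /etc/nginx/nginx.conf")
    conn.sudo("service nginx restart")


if __name__ == "__main__":
    provision()
```

Because the script lives in the repository, tearing a server down and rebuilding it becomes repeatable rather than a one-off manual exercise.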
https://stackoverflow.com/questions/162144/what-is-a-good-ruby-on-rails-hosting-service/265646#265646
I'm not interested in learning how to configure Apache, ModRails, Phusion, Mongrel, Thin, MySQL, and whatever. With Heroku I don't worry. nginx is the web server, and PostgreSQL is the database. They have settled on Ruby/Rack for all new apps. Frameworks that run on Rack include Rails, Merb, and Sinatra. Limited choices.

Hosting, deploying and running web applications in the cloud [closed]

So far I've read some blog articles about cloud computing and services for hosting applications in the grid.
If I'd wanted to have a web application running in the cloud for as little cost as possible, what would be the best solution?
Let's assume the following configuration:
J2EE web application
Any free database (MySQL, PostgreSQL)
Any web container to deploy the web application to
What application stack would you suggest to be the best combination of services to
host
deploy
run
web applications?
As an additional requirement, the chosen services shouldn't require a lot of server management, like firewall settings, etc.
This space is changing very quickly right now, so I think you will find a lot of different good answers. If I were to do something on the cheap right now, I would probably pick the following stack:
Web server: Apache
App server: Tomcat - use the clustering support if you need to grow, or split at the Apache level, or even introduce a load balancer box at the very front
DB server: MySQL - mainly because it is easy to cluster
Platform: Scalr - the cloud setup is simple and cheap. It uses Amazon's cloud on the backend, which gets you a lot of extras, like putting servers in different datacenters for redundancy.
Now you can add or remove parts of this. You may not need a web tier at all and can just expose Tomcat directly. You may need EJBs, and in that case you can fire up more nodes for them and create another tier. You may want to add a load-balancing tier in front of Apache. You may want to use the Amazon CloudFront service to push static files to their edge network.
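As an illustration of the Apache-in-front-of-Tomcat part, here is a minimal reverse-proxy sketch, assuming mod_proxy and mod_proxy_http are enabled and Tomcat listens on its default port 8080; for real clustering you would typically move to mod_proxy_balancer or mod_jk:

```
# httpd virtual host (sketch): forward all traffic to a local Tomcat instance
<VirtualHost *:80>
    ServerName example.com
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```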
I have investigated Amazon's EC2 solution recently. It is quite good, and there are many pre-built boxes that you can use if you find one that suits your needs. I think there will still be some server management involved; you cannot get away from that. But the pre-built boxes will make it easier.
The cost is reasonable as you only pay for what you use.
[EDIT] The pre-built boxes are called Amazon Machine Images (AMIs).
I think you can get nowhere closer than Jelastic. It has all the stuff that #carson mentioned. In particular, I would highlight their unique web console; there is no dependency on any API or console that has to be installed. I use their platform for many of my startup's clients. Additionally, you get nginx support for load balancing and can configure it right away from the console.
