Dockerized spring boot app with multiple (dynamic) datasources - spring-boot

I'm working on a Spring Boot application that uses AbstractRoutingDataSource to provide multitenancy. New tenants can be added dynamically, each one has its own datasource and pool configuration, and everything is working well so far, since we only have around 10 tenants right now.
But I'm wondering: since the application runs in a Docker container with limited resources, as the number of tenants grows, more and more connections (and the threads behind them) will be allocated, given a pool of 1 to 30 per tenant, and at some point (with 50 tenants, for example) the container will be killed because of the memory limit defined at container startup.
It seems to me that this multitenancy solution (using AbstractRoutingDataSource) is not suitable for an application designed to be containerized, since I can't simply scale it horizontally to handle more tenants.
Am I missing something? Should I be worried about that?
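For context, a minimal sketch of the routing setup being described; the class name, TenantContext and the filter that would populate it are assumptions, not taken from the question:

```java
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    // The key returned here is matched against the map of per-tenant DataSources
    // registered via setTargetDataSources(...).
    @Override
    protected Object determineCurrentLookupKey() {
        return TenantContext.getCurrentTenant();
    }
}

// Hypothetical ThreadLocal holder; a servlet filter would set the tenant id per request.
class TenantContext {
    private static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<>();

    static void setCurrentTenant(String tenantId) { CURRENT_TENANT.set(tenantId); }
    static String getCurrentTenant()              { return CURRENT_TENANT.get(); }
    static void clear()                           { CURRENT_TENANT.remove(); }
}
```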

The point made in the post is about the resource exhaustion that might arise as the volume of requests grows with the number of tenants. I would like to address a few points.
The whole infrastructure can be managed efficiently with ECS and AWS Fargate, so that under heavy load new containers are spun up automatically to absorb it. If you run separate servers instead, an ELB can help. Spinning up new containers/servers is not a problem as long as your services are stateless.
Regarding the number of active connections from your application to a database, you should profile your app and understand its data access patterns. Master data or static information should NOT be fetched from the database on every request (only the first time); it should be cached instead, and there are many managed cache services that can help you scale better. A sketch follows below.
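As a hedged illustration in a Spring Boot app, master data can be cached with Spring's cache abstraction. The service, cache name and data below are made up, and @EnableCaching plus a CacheManager backed by a managed cache are assumed to be configured:

```java
import java.util.Arrays;
import java.util.List;

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class MasterDataService {

    // First call executes the method (it would hit the database in a real app);
    // subsequent calls are served from the "countries" cache until it is evicted.
    @Cacheable("countries")
    public List<String> findAllCountries() {
        // stand-in for a real database query
        return Arrays.asList("AR", "BR", "CL", "UY");
    }
}
```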
As for the database, since each tenant has its own database, for a very large tenant try to scale out that database as well.
Focus on building the entire suite of features asynchronously, using Java's async facilities or RxJava, so that the asynchronous style helps keep thread usage under control (see the example below).
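For instance, a minimal sketch using CompletableFuture; the pool size and the simulated workload are illustrative, and RxJava would give a similar style with its reactive types:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncReportExample {

    // Dedicated pool so blocking work does not tie up request-handling threads.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(8);

    static CompletableFuture<String> loadTenantReport(String tenantId) {
        return CompletableFuture.supplyAsync(() -> {
            // the blocking call (JDBC query, HTTP call, ...) would go here
            return "report-for-" + tenantId;
        }, POOL);
    }

    public static void main(String[] args) {
        loadTenantReport("tenant-1").thenAccept(System.out::println).join();
    }
}
```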
Since you have not mentioned which cloud your application will be deployed to, I have used AWS as an example; however, most of these features have equivalents on Azure and GCP.
There are many strategies for scaling; a clear understanding of the business needs, data usage patterns and so on will help decide the right approach.
Hope this clarifies.

Related

How to separate different parts of a Laravel application?

I have a huge Laravel application. It consists of a dashboard where users have many different complex CRUDs, all saved in a database with more than 100 tables, and it also has an API for a mobile app that can reach a peak of 300 thousand requests per minute. As the app scales I'm having performance issues, because everything is on a single AWS-hosted EC2 server: all app images, company logos, all the resources for the dashboard and the whole API for the mobile app. I need a solution for this problem. Should I separate everything onto different machines? If so, how?
The app currently runs PHP 7.2 and Laravel 5.5 on an AWS EC2 12xlarge instance.
You are asking about some basic concepts of scalability in the cloud.
I will try to give one direction you could follow.
The current design is very bad for a couple of reasons:
As you said, it cannot scale because everything is held on one server;
Because everything is on one server, I hope you have automated backups in case your instance fails;
The only thing you can do in this configuration is scale vertically instead of horizontally (i.e. using more instances instead of one big one);
Files are on the same disk, so you cannot scale them out.
In terms of the application (Laravel), you are running a monolith: everything in one app. I don't need to tell you that it doesn't scale well, but it can.
Let's dive into the main topic: how to scale this big fat instance?
First of all, you should use a shared space for your images. There are NFS (expensive), S3 (cheap) and shared EBS (cheaper than NFS, but can only be used by a limited number of instances at a time). I would use S3.
We can skip the part where you refactor your monolith into a micro-service architecture with smaller parts. It can be done if you have time and money, but I would say it is not the priority for your scaling issue.
I don't know whether the database is also on the same EBS volume or not. If it is, use RDS: it is a managed database that requires almost no administration. You can use Multi-AZ for very high availability, or a Multi-AZ DB Cluster (new), which spreads the read load across two standby instances.
To go further with your application, you can also run mobile and web on separate instances, so that one does not impact the other.
And... that's all! Laravel has a transparent storage configuration mechanism that makes it easy to switch from one backend to another.
When I say "that's all", I mean in terms of ways to improve the scaling.
You will have to migrate the data from the EC2 database to RDS, transfer your images from EBS to S3, create an Auto Scaling group, create an IAM instance role so your EC2 Auto Scaling group can access S3, learn when the application peaks so you can do predictive scaling, etc.
I would recommend using IaC for this, like CloudFormation or Terraform.
This is the tip of the iceberg, but I hope you can start building a more robust system with these tips.

What technology to use to avoid too many VMs

I have a small web and mobile application partly running on a webserver written in PHP (Symfony). I have a few clients using the application, and slowly expanding to more clients.
My back-end architecture looks like this at the moment:
Database is Cloud SQL running on GCP (every client has its own database instance)
Files are stored on Cloud Storage (GCP) or S3 (AWS), depending on the client (every client has its own bucket)
The PHP application is running on a Compute Engine VM (GCP) (every client has its own VM)
Now the thing is, in the PHP code the only client-specific thing is a settings file with the database credentials and the Storage/S3 keys. All the other code is exactly the same for every client. And mostly the different VMs sit idle all day, waiting for a few hours of usage per client.
I'm trying to find a way to avoid having to create and maintain a VM for every customer. How could I rearchitect my back-end so I can keep separate databases and storage buckets per client, but only scale up my VMs when capacity is needed?
I'm hearing a lot about Docker, was thinking about keeping DB credentials and keys in a Redis DB or Cloud Datastore, and was looking at Heroku, App Engine, Elastic Beanstalk, ...
This is my ideal scenario as I see it now:
An incoming request comes in and hits a load balancer
From the request, determine which client the request is for
Find the correct settings file, or credentials from a DB
Inject the settings file in an unused "container"
Handle the request
Make the container idle again
And somewhere in there, determine, based on the amount of incoming requests or traffic, whether I need to spin containers up or down to handle the extra or reduced (temporary) load.
All this information overload has me stuck: I have no idea which direction to choose, and I fail to see how implementing any of the above technologies will actually fix my problem.
There are several ways to do it with minimal effort:
Rewrite the loading of the config file so it depends on the customer
Host several back-end web sites on one VM (the best choice, I think)

Camunda: architecture and decisions for a high-performance, dynamic multi-tenant app

I'm in charge of building a complex system for my company, and after some research I decided that Camunda fits most of my requirements. But some of my requirements are not common, and after reading the user guide I realized there are many ways of doing the same thing, so I hope this question will clarify my thoughts and also serve as a base question for everyone else looking to build something similar.
First of all, I'm planning to build a specific app on top of Camunda BPM. It will use workflow and BPM, but not necessarily all the stuff BPM/Camunda provides. This means it is not in my plans to use most of the web apps that come bundled with Camunda (tasks, modeler...), at least not for end users. And to make things more complicated, it must support multiple tenants... dynamically.
So, I will try to specify all of my requirements and then hopefully someone with more experience than me could explain which is the best architecture/solution to make this work.
Here we go:
Single App built on top of Camunda BPM
High-performance
Workload (10k new process instances/day after a few months).
Users (starting with 1k, expected to be ~ 50k).
Multiple tenants (starting with 10, expected to be ~ 1k)
Tenants dynamically managed (creation, deploy of process definitions)
It will be deployed on cluster
PostgreSQL
WildFly 8.1 preferably
After some research, these are my thoughts:
One Process Application
One Process Engine per tenant (see the sketch after this list)
Multi-tenancy data isolation: schema or table level.
Clustering (2 nodes) at first for high availability, adding more nodes as the number of tenants and the workload rise.
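To make the "one process engine per tenant" item concrete, here is a hedged sketch of booting an engine programmatically against a tenant's own PostgreSQL database; the class name, JDBC details and schema handling are illustrative assumptions, not something Camunda prescribes:

```java
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngineConfiguration;

public class TenantEngineFactory {

    // Builds a standalone engine bound to the tenant's own PostgreSQL database/schema.
    public static ProcessEngine createEngineForTenant(String tenantId, String jdbcUrl,
                                                      String user, String password) {
        return ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration()
                .setProcessEngineName("engine-" + tenantId)
                .setJdbcDriver("org.postgresql.Driver")
                .setJdbcUrl(jdbcUrl) // e.g. jdbc:postgresql://db-host/tenant_42 (illustrative)
                .setJdbcUsername(user)
                .setJdbcPassword(password)
                .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE)
                .buildProcessEngine();
    }
}
```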
Doubts
Should I let Camunda manage my users/groups, or better manage this in my app? In this case, can I say to Camunda “User X completed Task Y”, even if Camunda does not know about the existence of user X?
What about dynamic multi-tenancy? Is it possible to create tenants on the fly and make those tenants persist over time, even after restarting the application server? What about re-deployment of processes after restarting?
At which point should I think about partitioning engines across nodes? It's hard to figure out how I'm going to do this with dynamic multi-tenancy, but moreover... is this the right way to deal with high workload and a growing number of tenants?
With my setup of just one process application, is there anything else I should take care of in a cluster environment?
I'm not ruling out using only one tenant and one process engine and handling everything related to tenants logically within my app, but I understand that this can be very (VERY!) cumbersome.
All answers are welcome, hopefully we'll achieve a good approach to this problem.
1. Should I let Camunda manage my users/groups, or better manage this in my app? In this case, can I say to Camunda “User X completed Task Y”, even if Camunda does not know about the existence of user X?
Yes, you can have your app manage the users and tell Camunda that a task was completed by a user whom Camunda doesn't know about. In the same way, you can make Camunda assign tasks to users it doesn't know at all. This is done by implementing the org.camunda.bpm.engine.impl.identity.ReadOnlyIdentityProvider interface and letting the engine configuration know about your implementation.
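The identity-provider interface itself is fairly wide, so as a simpler, hedged illustration of this first point, here is what telling Camunda that an externally managed user completed a task can look like via the Java API; the class and method names around the call are illustrative:

```java
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngines;
import org.camunda.bpm.engine.TaskService;
import org.camunda.bpm.engine.task.Task;

public class ExternalUserTaskCompletion {

    // Completes a task on behalf of a user that only the host application knows about.
    public static void completeForExternalUser(String processInstanceId, String externalUserId) {
        ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();
        TaskService taskService = engine.getTaskService();

        Task task = taskService.createTaskQuery()
                .processInstanceId(processInstanceId)
                .singleResult();

        // The engine does not check this id against its own identity tables here,
        // so the user can live entirely in the application's own user store.
        taskService.setAssignee(task.getId(), externalUserId);
        taskService.complete(task.getId());
    }
}
```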
PS: If you don't need all the applications that come with Camunda, I would even suggest embedding the Camunda engine in your app. It can be done easily, and they have good documentation for their Java APIs.
2. What about dynamic multi-tenancy? Is it possible to create tenants on the fly and make those tenants persist over time, even after restarting the application server? What about re-deployment of processes after restarting?
Yes, it's possible to add tenants dynamically. When restarting the engine or your application, you can either redeploy or just keep using the already-deployed processes. Even when you redeploy a process, you can have Camunda create a new version only if the process has actually changed; see the enableDuplicateFiltering option of their DeploymentBuilder.
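A hedged sketch of such a tenant-scoped deployment; the tenant id, deployment name and resource path are made up for illustration:

```java
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.RepositoryService;
import org.camunda.bpm.engine.repository.Deployment;

public class TenantDeployment {

    // Deploys (or re-deploys) a tenant's process; with duplicate filtering enabled,
    // a new version is only created when the resource has actually changed.
    public static Deployment deployOrderProcess(ProcessEngine engine) {
        RepositoryService repositoryService = engine.getRepositoryService();
        return repositoryService.createDeployment()
                .tenantId("tenant-42")                        // hypothetical tenant id
                .name("tenant-42-processes")                  // hypothetical deployment name
                .addClasspathResource("processes/order.bpmn") // hypothetical resource path
                .enableDuplicateFiltering(true)               // true: deploy only changed resources
                .deploy();
    }
}
```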
3. At which point should I think about partitioning engines across nodes? It's hard to figure out how I'm going to do this with dynamic multi-tenancy, but moreover... is this the right way to deal with high workload and a growing number of tenants?
In my experience, it is possible. You need to keep track of various parameters here, such as memory, the number of requests being served and the number of open connections available, and then add or remove nodes accordingly. With AWS this is much easier, as some of the tooling for dynamically scaling nodes in and out is already available. That said, I have only done this with Camunda as an embedded engine.

How do you distribute your app across multiple servers using EC2?

For the first time I am developing an app that requires quite a bit of scaling; I have never had an application that needed to run on multiple instances before.
How is this normally achieved? Do I cluster the SQL servers, then mirror the application code across all servers and use load balancing?
Or do I separate out the functionality to run some on one server some on another?
Also, how do I push out code to all my EC2 Windows instances?
This will depend on your requirements. But as a general guideline (I am assuming a website), I would separate the DB, web server, caching server, etc. onto different instance(s) and use S3 (+ CloudFront) for static assets. I would also make sure that proper rate limiting is in place so that only legitimate load hits the infrastructure.
For the RDBMS I might set up master-slave replication (RDS makes this easier), use DB sharding, etc. DB cluster solutions also exist; they are more complex to set up but simplify database access for the application programmer. I would also review all the DB queries and tune the DB/SQL accordingly. In some cases a pure NoSQL database might be better than an RDBMS, or a mix of both where the application switches between them depending on the data required.
For the web server I would set up a load balancer and then use autoscaling on the web server instance(s) behind it. Something similar applies to the app server, if any. I would also tune the web server's settings.
The caching server will also be separated into its own cluster of instance(s). ElastiCache seems like a nice service. Redis has performance comparable to memcached but has more features (like lists, sets, etc.) which might come in handy when scaling.
Disclaimer - I'm not going to mention any Windows specifics because I have always worked on Unix machines. These guidelines are fairly generic.
This is a subjective question and everyone would tailor one's own system in a unique style. Here are a few guidelines I follow.
If it's a web application, separate the presentation (front-end), middleware (APIs) and database layers. A sliced architecture scales far better than a monolithic application.
Database - Amazon provides excellent and highly available services (unless you are in the us-east region) for SQL and NoSQL data stores. You might want to check out RDS for relational databases and DynamoDB for NoSQL. Both scale well, and you need not worry about managing and sharding/clustering your data stores once you launch them.
Middleware APIs - This is a crucial part. It is important to have a set of APIs (preferably REST, but you could pretty much use anything here) which expose your back-end functionality as a service. A service-oriented architecture can be scaled very easily to cater to multiple front-facing clients such as web, mobile, desktop, third-party widgets, etc. Middleware APIs should typically NOT be where your business logic is processed; most of it (or all of it) should be translated into database lookups/queries for higher performance. These services can be load balanced for high availability. Amazon's Elastic Load Balancers (ELB) are good for starters. If you want more customization, such as blocking traffic from certain IP addresses or performing blue/green deployments, then consider HAProxy load balancers deployed on separate instances.
Front-end - This is where your presentation layer should reside. It should avoid any direct database queries except for the ones which are limited to the scope of the front-end e.g.: a simple Redis call to get the latest cache keys for front-end fragments. Here is where you could pretty much perform a lot of caching, right from the service calls to the front-end fragments. You could use AWS CloudFront for static assets delivery and AWS ElastiCache for your cache store. ElastiCache is nothing but a managed memcached cluster. You should even consider load balancing the front-end nodes behind an ELB.
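The "simple Redis call" mentioned above might look roughly like this, shown in Java with the Jedis client; the host and key naming are assumptions:

```java
import redis.clients.jedis.Jedis;

public class FragmentCacheLookup {

    // Fetches the cache-busting token for a front-end fragment from Redis.
    public static String latestFragmentVersion(String fragmentName) {
        try (Jedis jedis = new Jedis("cache.internal", 6379)) {          // hypothetical host
            return jedis.get("fragment:" + fragmentName + ":version");   // hypothetical key scheme
        }
    }
}
```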
All this can be bundled and deployed with Auto Scaling using AWS Elastic Beanstalk. It currently supports ASP.NET, PHP, Python, Java and Ruby containers. AWS Elastic Beanstalk still has its own limitations, but it is a very cool way to manage your infrastructure with the least hassle for monitoring, scaling and load balancing.
Tip: Identifying the read and write intensive areas of your application helps a lot. You could then go ahead and slice your infrastructure accordingly and perform required optimizations with a read or write focus at a time.
To sum it all up, Amazon AWS has pretty much everything you could possibly use to craft your server topology. It's up to you to choose the components.
Hope this helps!
The way I would do it: have one server as the DB server with MySQL running on it, keep all my data in memcached (which can span multiple servers), and have my clients follow a simple "if not in memcached, read from the DB, put it in memcached and return" pattern.
Memcached is very easy to scale compared to a DB. Scaling a DB takes a lot of administrative effort; it's a pain to get right and keep working, so I choose memcached. In fact, I keep extra memcached servers up just to handle downtime of any of my memcached servers.
My data is mostly reads with few writes, and when writes happen I push the data to memcached too. All in all this works better for me in terms of code, administration, fallback, failover and load balancing. All win. You just need to code a "little" bit better.
Clustering MySQL is more tempting, as it seems easier to code, deploy, maintain and keep performing. But remember that MySQL is disk based and memcached is memory based, so by nature memcached is much faster (at least 10 times). And since it takes all the read load off the DB, your DB configuration can be REALLY simple.
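A minimal Java sketch of that read-through pattern, using the spymemcached client; the host, expiry and DB lookup are illustrative assumptions:

```java
import java.io.IOException;
import java.net.InetSocketAddress;

import net.spy.memcached.MemcachedClient;

public class ReadThroughCache {

    private final MemcachedClient cache;

    public ReadThroughCache(String host, int port) throws IOException {
        this.cache = new MemcachedClient(new InetSocketAddress(host, port));
    }

    // "If not in memcached, read from the DB, put it in memcached and return."
    public Object get(String key) {
        Object value = cache.get(key);
        if (value == null) {
            value = loadFromDatabase(key);   // hypothetical DB lookup
            cache.set(key, 3600, value);     // cache for one hour
        }
        return value;
    }

    private Object loadFromDatabase(String key) {
        // a real implementation would run a SQL query here
        return "value-for-" + key;
    }
}
```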
I really hope someone points to a contrary argument here, I would love to hear it.

DB Server Requirements Advice

I am building a MySQL database with a web front end for a client. The client and their staff will use this webapp on a daily basis, creating anywhere from a few thousand, to possibly a few hundred thousand records annually. I just picked up a second client who wishes to have the same product and will probably be creating the same number of records annually, possibly more.
In the future I hope to pick up a few more clients. In the next few years I could have up to 5 databases & web front ends running for 5 distinct clients, all needing tight security while creating, likely, millions of records annually (cumulatively across all the databases).
I would like to run all of this with Amazon's EC2 service but am having difficulty deciding on what type of instance to run. I am not sure if I should have several distinct Linux instances, one per client, or run one "large" instance which would manage all the clients' databases and web front ends.
I know that hardware configuration is rather specific to the task at hand. The web front ends will use jQuery to make the MySQL queries "pretty", and I will likely be doing some graphing of data (again with jQuery). The front ends will use SSL for security, which I understand can add some network overhead.
I'm looking for some of your thoughts on this situation.
Thanks
Use the tools that are available. The Amazon RDS service lets you run a MySQL database in the cloud with no extra effort. You can scale it up and down as you need - start small, and then as you hit your limits, add extra capacity (at extra cost).
Next, use Elastic Load Balancing (ELB) with an SSL certificate, so you offload the overhead of SSL decryption to an Amazon service.
If you're using Java for your webapp, you could use Elastic Beanstalk to handle the whole hosting process for you.
Don't be afraid to experiment - you can always resize instances with no data loss (if they boot from an EBS volume) and you can always create and delete instances. Scaling horizontally is often better than scaling vertically, as you can spread your instances across multiple Availability Zones.
Good luck!
