CodeIgniter: distributed production deployment

I couldn't find any references on how to deploy a CodeIgniter application in a distributed fashion: one (or many, behind a load balancer) machine for the views and controllers, and the same thing for the model layer.
Does CodeIgniter offer an easy setup for this, or will I have to set this up on my own?
Any thoughts appreciated :)

Your HTTP load balancer should decide which server running your CodeIgniter application serves each incoming request. Your views, models and controllers will be replicated across all servers, but they will all communicate with one single data store (e.g. a MySQL database). A shared session between the client and your servers keeps the client's state consistent no matter which server instance handles the request.
I don't think you really need to worry about creating a SOAP layer, unless each of your application servers maintains its own database locally and then synchronizes to a master DB or among the others.
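To make the shared-state idea concrete outside of any framework, here is a minimal Python sketch (not CodeIgniter code) in which every load-balanced app server reads and writes sessions in one shared Redis store; the host name and key layout are assumptions:

```python
import json

import redis

# Hypothetical shared session store: every app server behind the load
# balancer talks to the same Redis instance, so any server can pick up
# any client's session.
SESSION_STORE = redis.Redis(host="sessions.internal", port=6379)

def load_session(session_id: str) -> dict:
    """Fetch the client's session, regardless of which server handles it."""
    raw = SESSION_STORE.get(f"session:{session_id}")
    return json.loads(raw) if raw else {}

def save_session(session_id: str, data: dict, ttl: int = 3600) -> None:
    """Write the session back so the next request can land on any server."""
    SESSION_STORE.setex(f"session:{session_id}", ttl, json.dumps(data))
```

(In CodeIgniter itself the same effect comes from configuring a database- or Redis-backed session driver instead of the default file-based sessions.)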

Related

Distributed caching with NHibernate ORM

I am trying to implement caching in my application.
We are using an Oracle database and an ASP.NET Web API to serve data to the UI.
The API calls take too much time, so we are thinking of implementing caching. Our code is deployed on 2 servers behind a load balancer.
How should caching be implemented?
What I am planning to implement is this:
There should be a cache service API on one of the servers; this API will store all the data in memory. The UI will call our existing API (the request can hit either node), and that API will then get the data from the new cache API and serve it to the UI.
Is this architecture correct for distributed caching?
Can anyone share their experience or guidance on the implementation?
You might want to check out NCache. Being a distributed caching solution, it provides first-class support for sharing cache data between multiple clients, because the cache process runs autonomously outside any one application's address space.
In your case, every web server in your load-balanced web farm will be a client of NCache and have direct access to the cache servers. All the web servers, being clients of one central caching solution, will see the same cache data through simple-to-use NCache APIs. Any modification through insert, update or delete cache operations will be immediately observable by all the web servers.
The intelligence driving NCache allows for seamless behind-the-scenes handling of all the work of storing and distributing the cache data among the multiple cache server nodes across which the cache instance is distributed.
Furthermore, all the caching operations are completely independent of the framework used for database content retrieval and can be applied equally well with NHibernate, EF, EF Core and, of course, ADO.NET.
You can find more information about how to integrate NCache into your web farm environment, and much more, at the following link:
http://www.alachisoft.com/resources/docs/ncache/admin-guide/ncache-architecture.html
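The usual access pattern from each web server is cache-aside. Below is a minimal sketch of it in Python; the `DistributedCache` class is a stand-in for whatever client API your caching product exposes (NCache's own API is .NET-based), and all names are assumptions:

```python
from typing import Any, Callable, Optional

class DistributedCache:
    """Stand-in for a distributed cache client; get/insert mirror the
    usual API shape of products like NCache or memcached clients."""

    def __init__(self) -> None:
        self._store: dict[str, Any] = {}  # in-process stub, just for the sketch

    def get(self, key: str) -> Optional[Any]:
        return self._store.get(key)

    def insert(self, key: str, value: Any) -> None:
        self._store[key] = value

cache = DistributedCache()

def get_report(report_id: int, load_from_db: Callable[[int], dict]) -> dict:
    """Cache-aside: check the shared cache first, fall back to the Oracle
    query, then populate the cache so the other load-balanced node sees it."""
    key = f"report:{report_id}"
    value = cache.get(key)
    if value is None:                    # cache miss on this node
        value = load_from_db(report_id)  # the expensive database call
        cache.insert(key, value)         # now visible to every node
    return value
```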

Avoid bottlenecks in microservices

I'm going to apply microservices to my data warehouse application. There are 4 main microservices in the application:
1) Data Service: imports/exports external data sources to the DWH and queries data from the DWH.
2) Analytics Service: for chart visualization in the UI.
3) Machine Learning: for the recommendation system.
4) Reports: for report generation.
Each service has its own DB and they communicate directly with each other via TCP and Thrift serialization. The problem here is that the Data Service suffers a high load from the other services and can become a SPOF of the application. The data in the DWH is big, too (maybe up to hundreds of millions of records). How do I avoid the bottleneck at the Data Service in this case? Or, how do I define a proper bounded context to avoid the bottleneck?
You may think about:
splitting the Data Service into a few microservices, based on some business logic;
modifying the Data Service (if needed) to support more than one instance of the service, then using a load balancer to split requests between those instances (see the sketch below).
A load balancer is a device that acts as a reverse proxy and distributes network or application traffic across a number of servers. Load balancers are used to increase capacity (concurrent users) and reliability of applications.
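As a minimal illustration of the second option, here is a client-side round-robin sketch in Python. In practice a dedicated load balancer (nginx, HAProxy, a cloud LB) does this for you; the instance URLs are assumptions:

```python
import itertools
import urllib.request

# Hypothetical pool of Data Service replicas behind one logical endpoint.
DATA_SERVICE_INSTANCES = [
    "http://data-service-1:8080",
    "http://data-service-2:8080",
    "http://data-service-3:8080",
]
_round_robin = itertools.cycle(DATA_SERVICE_INSTANCES)

def query_dwh(path: str) -> bytes:
    """Send each request to the next replica in turn, so no single
    Data Service instance carries the whole load."""
    base = next(_round_robin)
    with urllib.request.urlopen(f"{base}{path}") as response:
        return response.read()
```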
Regarding "One database, multiple services":
Each microservice needs to have its own data storage; otherwise, you do not have a decomposition. If we are talking about a relational database, then this can be achieved using one of the following patterns (a small sketch of the last one follows this list):
Private tables per service – each service owns a set of tables that must only be accessed by that service.
Schema per service – each service has a database schema that's private to that service.
Database per service – each service has its own database.
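A minimal sketch of the strictest variant, database per service: each service is handed exactly one connection string and never touches another service's database. The URLs below are placeholders:

```python
# Each service gets exactly one DSN; no service knows the others' databases.
SERVICE_DATABASES = {
    "data-service":      "postgresql://data-svc@db-data/dwh",
    "analytics-service": "postgresql://analytics-svc@db-analytics/analytics",
    "ml-service":        "postgresql://ml-svc@db-ml/features",
    "report-service":    "postgresql://report-svc@db-report/reports",
}

def dsn_for(service_name: str) -> str:
    """A service may only look up its own entry; reaching for another
    service's DSN would break the decomposition."""
    return SERVICE_DATABASES[service_name]
```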
If your services use separate tables from the Data Warehouse database, and the Data Service only provides an access layer to the database without any additional processing logic, then yes, you may remove the Data Service and move the data-access logic into the corresponding services. But think about it from the other side: right now you have only one place (the Data Service) that knows how to access and manipulate the Data Warehouse, and that is what microservices are about.

What technology to use to avoid too many VMs

I have a small web and mobile application partly running on a web server written in PHP (Symfony). I have a few clients using the application, and I am slowly expanding to more clients.
My back-end architecture looks like this at the moment:
The database is Cloud SQL running on GCP (every client has its own database instance).
Files are stored on Cloud Storage (GCP) or S3 (AWS), depending on the client (every client has its own bucket).
The PHP application is running in a Compute Engine VM on GCP (every client has its own VM).
Now the thing is: in the PHP code, the only client-specific thing is a settings file with the database credentials and the Storage/S3 keys in it. All the other code is exactly the same for every client. And mostly the different VMs sit idle all day, waiting for a few hours of usage per client.
I'm trying to find a way to avoid having to create and maintain a VM for every customer. How could I rearchitect my back end so I can keep separate databases and storage buckets per client, but only scale my VMs up when capacity is needed?
I'm hearing a lot about Docker, was thinking about keeping the DB credentials and keys in a Redis DB or Cloud Datastore, and was looking at Heroku, App Engine, Elastic Beanstalk, ...
This is my ideal scenario as I see it now:
An incoming request comes in and hits a load balancer
From the request, determine which client the request is for
Find the correct settings file, or credentials from a DB
Inject the settings file in an unused "container"
Handle the request
Make the container idle again
And somewhere in there, determine, based on the amount of incoming requests or traffic, whether I need to spin containers up or down to handle the extra or reduced (temporary) load.
All this information overload has me stuck; I have no idea which direction to choose, and I fail to see how implementing any of the above technologies will actually fix my problem.
There are several ways to do it with minimal effort:
Rewrite the loading of the config file so it depends on the customer (see the sketch below);
Host several back-end web sites on one VM (the best choice, I think).
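A minimal sketch of the first option, assuming the tenant is identified by the request's Host header and the per-client credentials live in one lookup table (the names and fields are placeholders; the table could equally live in Redis or Cloud Datastore):

```python
# Hypothetical per-client settings, keyed by the host name the request
# arrived on.
TENANTS = {
    "clienta.example.com": {"db_dsn": "mysql://clienta-db/app",
                            "bucket": "clienta-files"},
    "clientb.example.com": {"db_dsn": "mysql://clientb-db/app",
                            "bucket": "clientb-files"},
}

def settings_for_request(host: str) -> dict:
    """Resolve the tenant from the Host header, so one pool of identical
    app containers can serve every client with the right credentials."""
    try:
        return TENANTS[host]
    except KeyError:
        raise ValueError(f"unknown tenant: {host}")
```

With that in place, the containers themselves are identical and interchangeable, so an autoscaler can add or remove them purely based on load.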

Does HttpRuntime.Cache work in a load cluster environment?

Does anyone know whether HttpRuntime.Cache works in a load cluster environment, and how to implement it?
What do you mean by a load cluster environment? Do you mean that the content of the cache should be the same on all members of the cluster?
If that is the case, this can be done in two ways:
Use SQL Server to serialize the cache and let all the servers point to the same SQL instance.
Use the ASP.NET State Server, which is a Windows service that ASP.NET talks to over TCP. Again, let all your web server instances point to the same state server.
Another approach is not to use HttpRuntime.Cache at all, but to implement your own cache provider and use a technology like memcached (a minimal sketch follows below).
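The question is about ASP.NET, but the shared-cache idea is language-agnostic. Here is a minimal cache-aside sketch against memcached in Python using the pymemcache client; the server address and key names are assumptions:

```python
from pymemcache.client.base import Client

# One memcached instance shared by every node in the cluster, so all
# web servers observe the same cache content.
shared_cache = Client(("cache.internal", 11211))

def load_profile_from_db(user_id: int) -> bytes:
    """Placeholder for the real database lookup."""
    return f"profile-{user_id}".encode()

def get_user_profile(user_id: int) -> bytes:
    """Cache-aside against the shared store: a node that misses loads
    from the database and repopulates the cache for all the others."""
    key = f"user:{user_id}"
    value = shared_cache.get(key)
    if value is None:
        value = load_profile_from_db(user_id)
        shared_cache.set(key, value, expire=300)  # 5-minute TTL
    return value
```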

Is there a name for this architectural pattern?

Suppose you have a web site consisting of:
A web server serving your various users' requests
A DB for persistence
A separate server asynchronously doing background stuff - preparing the data on the DB, updating it according to changes, etc. - regardless of what's going on in the main server.
You can easily translate this into another world and talk about threads, for example: one thread preparing the data asynchronously for a main thread that is running.
Is there a name for this pattern, if it is a pattern at all?
Are there any pros/cons for this method of processing data in terms of performance?
Let me just clarify that I'm asking specifically about the second web server doing background processing, and not the whole architecture.
Everything has patterns. If you've seen it more than twice, there's a pattern.
You've got three examples of Client-Server.
You've got Browser-Web Server.
You've got the web server in the role of DB client talking to the DB server.
You've got the web server in the role of app-server client talking to an app server.
Sometimes folks like to call this N-Tier, since there are at least three tiers (browser, web server, DB server), plus an additional application-server tier.
Some folks expand this into the Services Bus concept. Your web server uses DB server and application server.
The Asynchronous Back-End and Back-end Server are names I've heard to describe your application server architecture.
I don't have a pattern name for you (not to say there isn't one), but what you have here is an optimization to keep your main thread from responding slowly to requests. It doesn't have to calculate data, it just has to provide it.
This is similar to UI coding. You don't do any work on your UI thread, you just draw. Other threads should be responsible for figuring everything else out, so your UI is responsive.
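As a toy illustration of that division of labor, here is a Python sketch with one background thread doing the preparation while the main thread only serves what is already prepared; the work items are made up:

```python
import queue
import threading
import time

prepared: dict[str, str] = {}           # data the "main server" serves instantly
jobs: queue.Queue[str] = queue.Queue()  # work handed to the background worker

def background_worker() -> None:
    """Plays the separate server: does the slow preparation so the
    request-serving side never has to."""
    while True:
        key = jobs.get()
        time.sleep(0.1)                      # simulate expensive processing
        prepared[key] = f"result for {key}"  # stand-in for real prepared data
        jobs.task_done()

threading.Thread(target=background_worker, daemon=True).start()

def handle_request(key: str) -> str:
    """Plays the main server: serve precomputed data only, never compute."""
    if key in prepared:
        return prepared[key]
    jobs.put(key)  # schedule preparation, respond cheaply for now
    return "not ready yet"
```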
I don't know if it's officially a pattern name, but this looks almost like batch processing from the mainframe days.
