Solution other than multiple instances of GeoServer? - geoserver

I'm working on an application that's supposed to have multiple subdomains for multiple regions, with geospatial data served separately for each region. It should be hosted on a single hosting server.
A demo of the app is at nadlanu.gromatic.hr, and opcina.gromatic.hr is the version for the other region.
I'm having trouble separating the layer "Komunalni problemi" ("communal problems") for the two regions: when a report is submitted (right menu, "Predaj prijavu" / "submit a report") on opcina.gromatic.hr, it shows up only on nadlanu.gromatic.hr.
I've created a separate layer for that, with a different store (pointing to a different database) and workspace in GeoServer, but obviously that doesn't work. So I've come to understand (correct me if I'm wrong, please) that I need multiple instances of GeoServer to solve this issue, but that won't work either due to the single-server hosting limitation, as I need more than 20 separate subdomains with separate geospatial data.
Thank you for the answers!

Related

How to separate different parts of a Laravel application?

I have a huge Laravel application. It consists of a dashboard where users have many different complex CRUDs, all saved in a database with more than 100 tables, and it also has an API for a mobile app that can reach a peak of 300 thousand requests per minute. As the app scales I'm having performance issues, since everything is on one single AWS-hosted EC2 server; by everything I mean all app images, company logos, etc., all the resources for the dashboard, and the whole API for the mobile app. I need a solution for this problem. Should I split it all across different machines? If so, how?
The whole app is currently running PHP 7.2 and Laravel 5.5 on an AWS EC2 12xlarge instance.
You are asking about basic concepts of scalability in the cloud.
I will try to give you one direction you could follow.
The current design is bad for a couple of reasons:
As you said, it cannot scale because everything lives on one server.
Because everything is on one server, I hope you have automated backups in case your instance fails.
The only thing you can do in this configuration is scale vertically instead of horizontally (horizontal scaling means using more instances instead of one big one).
Files are on the same disk, so file storage cannot scale either.
In terms of application architecture (Laravel), you are running a monolith: everything in one app. I don't need to tell you that this doesn't scale well, although it can scale.
Let's dive into the main topic: how to scale this big fat instance?
First of all, you should use shared storage for your images. The options are NFS (expensive), S3 (cheap), and shared EBS (cheaper than NFS, but it can only be attached to a limited number of instances at a time). I would use S3.
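On the Laravel side this switch is mostly configuration. As a minimal sketch (the bucket name, region, and environment defaults below are placeholders, not taken from the question), the S3 disk is declared in config/filesystems.php:

```php
<?php
// config/filesystems.php (fragment) -- bucket and region are placeholder values
return [
    'default' => env('FILESYSTEM_DRIVER', 's3'),

    'disks' => [
        's3' => [
            'driver' => 's3',
            'key'    => env('AWS_ACCESS_KEY_ID'),
            'secret' => env('AWS_SECRET_ACCESS_KEY'),
            'region' => env('AWS_DEFAULT_REGION', 'eu-west-1'),
            'bucket' => env('AWS_BUCKET', 'my-app-assets'),
        ],
    ],
];
```

Application code keeps calling the Storage facade (Storage::put, Storage::url, ...); only the disk behind it changes.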
We can skip the part where you refactor your monolithic application into a microservice architecture with smaller parts. It can be done if you have time and money, but I would say it is not the priority for your scaling issue.
I don't know whether the database is also on the same EBS volume or not. If it is, use RDS: it is a managed database that requires almost no administration. You can have Multi-AZ for very high availability, or a Multi-AZ DB cluster (new), which will spread the read load across 2 standby instances.
To go further, you can also run the mobile API and the web app on separate instances, so that one doesn't impact the other.
And... that's all! Laravel has a transparent configuration mechanism for storage, so it is easy to switch from one backend to another.
When I say "that's all", I mean in terms of ways to improve scaling.
You will still have to migrate the data from the EC2 database to RDS, transfer your images from EBS to S3, create an Auto Scaling group, create an IAM instance role so your EC2 Auto Scaling group can access S3, learn when the application peaks so you can do predictive scaling, etc.
I would recommend using IaC for this, like CloudFormation or Terraform.
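As an illustration of the IaC suggestion (all names here are hypothetical, and this is not a complete deployable configuration), the S3 bucket and the instance role from the steps above could be sketched in Terraform like this:

```hcl
# Hypothetical names; a sketch, not a full, deployable configuration.
resource "aws_s3_bucket" "assets" {
  bucket = "my-app-assets"
}

# Role that EC2 instances in the Auto Scaling group will assume.
resource "aws_iam_role" "app" {
  name = "app-instance-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# Allow the instances to read and write objects in the assets bucket.
resource "aws_iam_role_policy" "s3_access" {
  name = "s3-assets-access"
  role = aws_iam_role.app.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action   = ["s3:GetObject", "s3:PutObject"]
      Effect   = "Allow"
      Resource = "${aws_s3_bucket.assets.arn}/*"
    }]
  })
}
```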
This is the tip of the iceberg, but I hope you can start building a more robust system with these tips.

Spring Boot multiple microservices with one database

I know there are many questions like this, and almost all the answers are no. The reason is that a microservice should be independent of the others: if a table changes, every microservice using that table has to change too.
But my question is: if my database structure is fixed (there will hardly be any change to the table structure), would it be a good idea to create multiple microservices pointing to the same database?
Okay... here is my project.
We are going to migrate a Struts 1.3/EJB 2.0 project to Angular/microservices. This project has 5 different modules, and each module is huge. The project has been in production for the past 13 years, so there is very little chance of the table structures changing.
The reason I want to make separate microservices is that each module is huge and complicated, and we still get requirements to add or change business logic. That way, I can deploy only the one microservice that changed.
Any suggestions, please?
I suggest creating a new service that accesses that database, and having all the other services communicate with this service instead of talking to the database directly.
If you don't want to create a new service, at least access the DB through some database abstraction layer.
For example, in SQL Server, use views and stored procedures instead of accessing the tables directly.
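To make the abstraction-layer idea concrete, here is a hedged sketch for SQL Server (the table, view, and procedure names are invented for illustration): each service reads through a view and writes through a procedure, so the underlying table can evolve without every service changing.

```sql
-- Hypothetical schema: services read through the view and write through the
-- procedure, never touching dbo.Orders directly.
CREATE VIEW dbo.OrderSummary AS
    SELECT OrderId, CustomerId, Total
    FROM dbo.Orders;
GO

CREATE PROCEDURE dbo.AddOrder
    @CustomerId INT,
    @Total DECIMAL(10, 2)
AS
BEGIN
    INSERT INTO dbo.Orders (CustomerId, Total)
    VALUES (@CustomerId, @Total);
END;
```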

Separating different parts of the project in Git

How can I efficiently separate different parts of a project in Git? I have a Laravel web application that includes an admin panel plus an API for a mobile app, and I want to split them to increase performance. I thought it would be a good idea to separate the admin part from the API, so I can disable a service provider in the API, run the admin panel on a different server (connecting to the database via remote MySQL), and dedicate a server to the API. How can I separate these parts without duplicating the changes I make in common parts like models? I thought of keeping them as two branches in a Git repository. Is there a better way to do this separation, or to do the whole optimization, that is easier to maintain?
Update: The issue I'm facing is response time. I put the following code into my routes, and it takes 400-600 ms to respond.
Route::any('/test2', function () {
    return "test";
});
I tested it on two different servers, and I think the configuration is good enough (10 GB RAM, 4 CPU cores at 3.6 GHz). By the way, I have less than 1k requests per hour for now, and soon I'm looking at 5k-20k at most.
I think dividing your source code into modules is good enough. Have a look at Laravel Module.
I would suggest you do what the creator of the framework (Taylor) does: packages, managed with Composer.
In the Laravel community you have many packages available, like Horizon, Nova, Telescope, the Spatie packages, etc.
If you want to add one, you just add a Composer dependency and it works out of the box.
You can do the same with the code that will live in both projects, like models.
Every Package has its own Git repo.
This is a more Laravel-like way to do it than separating into modules (compared to the Symfony world). Laravel doesn't come with modules at its core.
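As a sketch of the package approach (the vendor name, package name, and repository URL below are placeholders), shared code such as models becomes its own Composer package that both projects require:

```json
{
    "repositories": [
        {
            "type": "vcs",
            "url": "https://example.com/acme/shared-models.git"
        }
    ],
    "require": {
        "acme/shared-models": "^1.0"
    }
}
```

A change to a model is then made once in the package's own Git repo, and each project picks it up with a composer update.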
Now about separating projects:
As I read your needs, I am not sure you will have performance issues running the API and the admin panel in the same project, unless you have millions of HTTP calls per hour.
I am currently working on a project with a lot of client-side code; we also have an API with thousands of calls per hour, and everything is fine. We also run Nova for the internal backend.
You should consider that when you do hit those scaling problems, you will probably have database problems too, and maybe server problems (bandwidth, memory, cost, etc.).
Being 100% scalable is not an easy task.
In my opinion, solve it when you face it. Separating the API and the admin panel at the beginning could be too much overhead for starting and maintaining a project.

Multi Domains in One Database

I have one database with one domain, but my database has 3 websites available. I want to publish my 2nd website against that database. Is that possible?
You might want to make sure that you're not violating the terms of service of the company hosting your database. Having many outside domains hitting an internal database may put stress on that server that the company is not counting on, or eat up more bandwidth than is allotted for that machine.
In the same breath, though, if you set up some type of data-layer web service that all your sites connect to, your other domains are no longer hitting the database directly, yet do essentially the same thing, in a more ordered fashion of predictable database calls. This may not be what you're looking for, but if set up correctly it could make developing against your database much easier.

DB Server Requirements Advice

I am building a MySQL database with a web front end for a client. The client and their staff will use this webapp on a daily basis, creating anywhere from a few thousand, to possibly a few hundred thousand records annually. I just picked up a second client who wishes to have the same product and will probably be creating the same number of records annually, possibly more.
In the future I hope to pick up a few more clients. In the next few years I could have up to 5 databases & web front ends running for 5 distinct clients, all needing tight security while creating, likely, millions of records annually (cumulatively across all the databases).
I would like to run all of this with Amazon's EC2 service but am having difficulty deciding on what type of instance to run. I am not sure if I should have several distinct Linux instances, one per client, or run one "large" instance which would manage all the clients' databases and web front ends.
I know that hardware configuration is rather specific to the task at hand. The web front ends will be using jQuery to make the MySQL query results "pretty", and I will likely be doing some graphing of the data (again with jQuery). The front ends will use SSL for security, which I understand can add some overhead.
I'm looking for some of your thoughts on this situation.
Thanks
Use the tools that are available. The Amazon RDS service lets you run a MySQL database in the cloud with no extra effort. You can scale it up and down as you need - start small, and then as you hit your limits, add extra capacity (at extra cost).
Next, use Elastic Load Balancing (ELB) with an SSL certificate, so you offload the overhead of SSL decryption to an Amazon service.
If you're using Java for your webapp, you could use Elastic Beanstalk to handle the whole hosting process for you.
Don't be afraid to experiment - you can always resize instances with no data loss (if they boot from an EBS volume) and you can always create and delete instances. Scaling horizontally is often better than scaling vertically, as you can spread your instances across multiple Availability Zones.
Good luck!
