Good day,
I will begin developing a Web API solution for a multi-company organization. I'm hoping to make all useful data available to any company across the organization.
Given that I expect there to be a lot of growth with this solution, I want to ensure that it's organized properly from the start.
I want to organize various services by company, and then again by application or function.
So, with regards to the URL, should I target a structure like:
/company1/application1/serviceOperation1
or is there some way to leverage namespaces:
/company2.billing/serviceOperation2
Is it possible to create a separate Web API project in my solution for each company? Is there any value in doing so?
Hope we're not getting too subjective, but the examples I have seen have a smaller scope, and I really see my solution eventually exposing a lot of Web API services.
Thanks,
Chris
Before writing a line of code I would look at how the information is to be secured, deployed, and versioned, and at the culture of the company.
Will the same security mechanisms (protocols, certificates, modes, etc.) be shared across all companies and divisions?
If they are shared then there is a case for keeping them in the same solution
Will the services cause differing amounts of load and be deployed onto multiple servers with different patching schedules?
If the services are going onto different servers then they should probably be split to match
Will the deployment and subsequent versioning schedule be independent for each service or are all services always deployed together?
If they are versioned independently then you would probably split the solution accordingly
How often does the company restructure and keep their applications?
If the company is constantly restructuring, you would probably want to split the services by application. If the company is somewhat stable and focused on changing the application capabilities, then you would probably want to split the services by division function (accounts, legal, human resources, etc.)
As for the URL structure used to access all this, it should naturally flow from the answers above. Hope this helps.
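To picture the first URL scheme from the question (/company/application/serviceOperation), here is a minimal sketch of a hierarchical route table in Python. The company, application, and operation names are all hypothetical, and a real Web API framework would express this with route templates rather than a dict:

```python
# Minimal sketch of hierarchical routing: /company/application/operation.
# All company/application/operation names below are made up for illustration.

def make_router():
    # Nested dict: company -> application -> operation -> handler
    routes = {
        "company1": {
            "billing": {
                "getInvoices": lambda: "invoices for company1",
            },
            "inventory": {
                "listItems": lambda: "items for company1",
            },
        },
        "company2": {
            "billing": {
                "getInvoices": lambda: "invoices for company2",
            },
        },
    }

    def dispatch(path):
        # Expect paths like "/company1/billing/getInvoices"
        parts = path.strip("/").split("/")
        if len(parts) != 3:
            raise ValueError("expected /company/application/operation")
        company, app, op = parts
        try:
            handler = routes[company][app][op]
        except KeyError:
            raise LookupError("no route for " + path)
        return handler()

    return dispatch
```

The nesting makes the split points visible: if a company or application later moves to its own project or server, its whole subtree of routes moves with it.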
I'm a mobile/front-end developer and need help with the back-end architecture, where I'm totally green. I'm building web and mobile front ends in Flutter that will communicate with a server written in Go. Based on a config file attached to the Flutter front end I will create a few separate apps, but every single app needs a separate instance of the back-end services, or at least a separate database.
My question is about what architecture I should use, in terms of future scaling, to lower server maintenance costs while having the best performance. Correct me if I'm wrong, because what follows is just my understanding of the structure, but based on what I wrote above: am I correct that I should use a load balancer, with the business logic spread across Kubernetes instances, and only have a separate database for every single Flutter app? Or is there another solution I'm unaware of? Any help, or guides that at least lead me to more knowledge I can learn from, would be much appreciated.
I don't know yet whether it's a perfect solution, but I will leave it here in case someone is looking for it in the future. My friend, who codes in PHP, introduced me to the multi-tenant architecture pattern, and after researching it I find it a good fit for what I've been looking for.
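For future readers, the core of that multi-tenant idea can be sketched as a tenant resolver: one shared backend, with each app (tenant) mapped to its own database. This is a hypothetical sketch in Python rather than the Go stack from the question, and the tenant IDs, header name, and connection strings are all made up:

```python
# Hypothetical sketch of multi-tenant database routing: one shared backend,
# one database per tenant/app. Tenant IDs and DSNs are invented examples.

TENANT_DATABASES = {
    "app-alpha": "postgres://db-host/alpha",
    "app-beta": "postgres://db-host/beta",
}

def resolve_tenant_dsn(request_headers):
    # The tenant could come from a header, a subdomain, or a token claim;
    # a custom header is assumed here purely for illustration.
    tenant = request_headers.get("X-Tenant-ID")
    if tenant is None:
        raise PermissionError("missing tenant identifier")
    try:
        return TENANT_DATABASES[tenant]
    except KeyError:
        raise PermissionError("unknown tenant: " + tenant)
```

The point of the pattern is that every request is scoped to exactly one tenant's data before any business logic runs, so one deployment can serve all the apps.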
For example,
You have an IT estate where a mix of batch and real-time data sources exists from multiple systems, e.g. ERP, Project management, asset, website, monitoring etc.
The aim is to integrate the data sources into a (vendor-agnostic) cloud environment.
There is a need for reporting and analytics on combinations of all data sources.
Inevitably, some source systems are not capable of streaming, hence batch loading is required.
There are potential use cases for performing functionality/changes/updates based on the ingested data.
Given a steer for creating a future-proofed platform, architecturally, how would you look to design it?
It's a very open-ended question, but there are some good principles you can adopt to point you in the right direction:
Avoid point-to-point integration, and get everything going through a few common points - ideally one. Using an API Gateway can be a good place to start; the big players (Azure, AWS, GCP) all have their own options, plus there are lots of decent independent ones like Tyk or Kong.
Batches and event streams are totally different, but even then you can still potentially route them all through the gateway, so that you get centralised observability (reporting, analytics, alerting, etc.).
Use standards-based API specifications where possible. A good REST-based API, built on a proper resource model, is a non-trivial undertaking; I'm not sure it fits what you are doing if you are dealing with lots of disparate legacy integration. If you do adopt REST, use OpenAPI to specify the APIs. Using this standard not only makes it easier for consumers, but also gives you better tooling, as many design, build, and test tools support OpenAPI. There's also AsyncAPI for event/async APIs.
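To make the OpenAPI suggestion concrete, here is a minimal, hypothetical OpenAPI 3.0 document for a single resource, built as a Python dict so it could be dumped to JSON or YAML. The path and field names are invented, not taken from any real estate system:

```python
# Minimal, hypothetical OpenAPI 3.0 document describing one resource.
# The /assets path and its schema are made-up examples.

openapi_doc = {
    "openapi": "3.0.3",
    "info": {"title": "Estate Data API", "version": "1.0.0"},
    "paths": {
        "/assets/{assetId}": {
            "get": {
                "summary": "Fetch one asset by id",
                "parameters": [
                    {
                        "name": "assetId",
                        "in": "path",
                        "required": True,
                        "schema": {"type": "string"},
                    }
                ],
                "responses": {
                    "200": {"description": "The asset record"},
                    "404": {"description": "Unknown asset"},
                },
            }
        }
    },
}
```

Even a spec this small is enough for OpenAPI-aware tools to generate client stubs, mock servers, and contract tests, which is where the "better tooling" payoff comes from.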
Do some architecture. Moving sh*t to cloud doesn't remove the sh*t - it just moves it to the cloud. Don't recreate old problems in a new place.
Work out the logical components in your new solution: what does each of them do (what's its reason to exist)? Don't forget ancillary components like API catalogues, etc.
Think about layering the integration (usually depending on how they will be consumed and what role they need to play, e.g. system interface, orchestration, experience APIs, etc).
Want to handle data in a consistent way regardless of source (your 'agnostic' comment)? You'll need to think through how data is ingested and processed. This might lead you into more data / ETL centric considerations rather than integration ones.
Co-design. Is the integration mainly data coming in or going out? Is the integration with 3rd parties or strictly internal?
If you are designing for external / 3rd party consumers then a co-design process is advised, since you're essentially designing the API for them.
If the APIs are for internal use, consider designing them for external use anyway, so that when/if you decide to open them up later it's not so hard.
Take a step back:
Continually ask yourselves "what problem are we trying to solve?". Usually, a technology initiative is successful if there's a well-understood reason for doing it, with solid buy-in from the business (non-IT).
Who wants the reporting, and why - what problem are they trying to solve?
As you mentioned, it's an IT estate, i.e. an enterprise-level mix of batch and real-time sources, so first you have to identify the end goal of this migration. You could consider refactoring the applications. If you are trying to make the estate event-driven, then assess the refactoring effort and cost. Separation of responsibility is the key factor for refactoring and migration.
If you are thinking about future-proofing your solution, then consider the cloud for storing and processing your data. It won't necessarily be cheap, but a mix of cloud and on-prem could be a way forward. Cloud providers offer services to move your data at minimal cost, and cloud-native solutions exist for performing analysis on your data. A database migration service (in AWS or Azure) can move the data and then capture ongoing changes, so you can keep using your on-prem DBs and apps while performing analysis and reporting in the cloud. That will ease the load on your transactional DB, and most data sync from on-prem to cloud is near real time.
I know this question was already asked but I could not find a satisfying answer.
I started to dive deeper into building a real RESTful API and I like its constraint of using links for decoupling. So I built my first service (with Java / Spring) and it works well (although I struggled a bit with finding the right format, but that's another question). After this first step I thought about my real-world use case: microservices. Highly decoupled individual services. So I mapped out my previous scenario and came upon some problems and doubts.
SCENARIO:
My setup consists of a reverse proxy (Traefik, which acts as service discovery and API gateway) and two microservices. In addition, there is an OpenID Connect security layer. My services are a Player service and a Team service.
So after auth I have an access token with the userId, and I am able to call player/userId to get the player information, and teams?playerId=userId to get all the teams of the player.
In my opinion, each response should link to the opposite service: player/userId would link to teams?playerId=userId and vice versa.
QUESTION:
I haven't found a solution besides linking via a hardcoded URL. But this comes with so many downsides that I can't imagine it's the solution used in real-world applications. I mean, just imagine your API is a bit more advanced and you have to link to 10 resources. If something changes, you have to refactor and redeploy them all.
Besides the synchronization problem, how do you handle state in such a case? I mean, REST is all about state transfer, so I won't offer the link from the player to the teams service if the player is in no team. Of course I can add the team IDs as an attribute of the player to decide whether to include the link or not, but this again increases coupling between the services.
The more I dive in, the more obstacles I find, and I'm about to just stay with my Spring REST Docs and neglect the core of REST, which is a pity to me.
Is this practicable for a microservice architecture?
Fielding, 2000
The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction.
Fielding 2008
REST is intended for long-lived network-based applications that span multiple organizations.
It is not immediately clear to me that "microservices" are going to fall into the sweet spot of "the web". We're not, as a rule, trying to communicate with a microservice that is controlled by another company; we often don't get a lot of benefit out of caching, or code on demand, or the other REST architectural constraints. How important is it to us that we can use general-purpose components to exchange information between different microservices within our solution? And so on.
If something changes, you have to refactor and redeploy them all.
Yes; and if that's going to be a problem for us, then we need to invest more work up front to define a stable interface between the two. (The fact that we are using "links" isn't special in that regard - if these two things are going to talk to each other, then they are going to need to speak a common language; if that common language needs to evolve over time (likely) then you need to build those capabilities into it).
If you want change over time, then you have to plan for it.
If you want backwards/forwards compatibility, then you have to plan for it.
Your identifiers don't need to be static - there are lots of possible ways of deferring the definition of an identifier, the most obvious being that you can use another identifier to look up the identifier you want, or the formula for calculating it, or whatever.
Think about how Google works - the links they use change all the time, but it doesn't matter, because the protocol (refresh your bookmarked search form, enter your text in "the" one field, click the button) hasn't changed in 20 years. The interface is stable (even though the underlying spellings of the identifiers are not), and that's enough.
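One way of deferring identifiers, as described above, is to hardcode only an entry point (or registry) and have every response carry the links consumers should follow. The sketch below (hypothetical URLs and relation names) also shows the conditional-link concern from the question: the teams link is only advertised when the player actually has teams:

```python
# Hypothetical sketch: a player representation whose links are looked up
# and conditional, so consumers follow links rather than building URLs.

SERVICE_REGISTRY = {
    # In practice this could come from service discovery (e.g. Traefik)
    # or from following links off a single well-known entry point.
    "teams": "https://api.example.com/teams",
}

def player_representation(player_id, team_ids):
    doc = {
        "id": player_id,
        "_links": {
            "self": {"href": "https://api.example.com/players/" + player_id},
        },
    }
    # Only advertise the teams link when the player has teams: the link's
    # presence or absence is itself the transferred state.
    if team_ids:
        doc["_links"]["teams"] = {
            "href": SERVICE_REGISTRY["teams"] + "?playerId=" + player_id
        }
    return doc
```

Only the registry (or entry point) needs redeploying when the teams service moves; the player service's own contract - "follow the `teams` link if present" - stays stable.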
I'm totally new to the concept of microservices and AWS serverless architecture. I have a project that I have to divide into microservices that should run on AWS Lambda, but I'm facing difficulty in how to design it.
When searching, I could not find useful documentation about how to divide and design microservices; all the docs I saw compare monolithic apps to microservice apps, or cover deploying a microservice on AWS Lambda.
In my case I have to develop an ERP (Enterprise Resource Planning) system that has to manage clients, manage stocks, manage books, manage orders... So should I make a service for clients and a service for books... and then, if I notice a lot of dependency between two microservices, merge them into one?
And for the DB: is it good to use one DB (DynamoDB) for all microservices, instead of a DB for every service, in this case (ERP)?
Any help is really appreciated.
If anybody has a useful document that can help me, I will be very thankful.
Thanks a lot.
I think the architecture of your data and services can depend on a few things:
Which data sources are used/available
What your requirements/desired functionalities are
Business logic or any other restrictions/concerns
In order to reduce the size of a service, we want to limit the reasons why an application or another service would access that service to as few as possible. This reduces the overall maintenance burden of managing it and also gives you a lot of flexibility in how services are deployed.
For example: a service which transforms data from multiple sources and makes it available via an API can be split into an API service plus a data-processing service with a new, cleaner data source. This prevents over-reliance on large, older services and data, and makes integrating the newer, smaller service easier for your applications.
In your scenario, you may get away with having services for managing clients, books, and stocks separately, but it also depends on how your data sources are integrated as well as what services are already available to you. You may want to create other microservices or databases to help reduce the size and organize the data into the format you want.
Depending on your business needs, whether to combine two microservices or keep them separate can depend on different things too. Does one of these services have the potential to be useful for other applications? Is it dedicated to a specific project? Keeping services separate, small, and focused gives you room to expand or shrink things if needed. The same goes for data sources.
There are always many ways to approach it. Consider what your needs/problems are first before opting with a certain tool for creating solutions.
Microservices
These are simply small services that can be scaled and deployed independently.
AWS Serveless
Every application is different, so you may not find a single architecture that fits every application. Generally, a simple serverless application consists of a Lambda function, API Gateway, and a DB (SQL/NoSQL). Serverless is a great cloud-native choice: you get availability and scalability out of the box, and you can deploy your stuff very quickly.
How to design Serverless Application
There is no right answer. You need to architect your system in a way that individual microservices can work cohesively. In your case, books and stocks need to be separate microservices, which means they are separate Lambda functions. For the DB, DynamoDB is a good and powerful choice, as long as you know how NoSQL works and the caveats around it. You need to think beforehand about the challenges of NoSQL: how will you partition the data? What if you need complex reporting - would NoSQL be a good choice? There are patterns around to get past that issue. Since DynamoDB operates at the table level, each microservice will preferably own a separate table that can be scaled independently, which makes more sense.
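As a sketch of "one table per microservice", the definitions below show the shape of what you might pass to DynamoDB's CreateTable for a books service and a stocks service. The table, key, and attribute names are hypothetical, and nothing here actually calls AWS:

```python
# Hypothetical DynamoDB key schemas, one table per microservice. These
# dicts mirror the shape of a boto3 create_table call, but are only
# constructed locally; no AWS request is made.

def table_definition(table_name, partition_key, sort_key=None):
    key_schema = [{"AttributeName": partition_key, "KeyType": "HASH"}]
    attributes = [{"AttributeName": partition_key, "AttributeType": "S"}]
    if sort_key:
        key_schema.append({"AttributeName": sort_key, "KeyType": "RANGE"})
        attributes.append({"AttributeName": sort_key, "AttributeType": "S"})
    return {
        "TableName": table_name,
        "KeySchema": key_schema,
        "AttributeDefinitions": attributes,
        # On-demand billing lets each table scale independently.
        "BillingMode": "PAY_PER_REQUEST",
    }

books_table = table_definition("books", "bookId")
stocks_table = table_definition("stocks", "warehouseId", sort_key="sku")
```

Choosing the partition key (and optional sort key) per table is exactly the "think beforehand about partitioning" step: here stocks is assumed to be queried by warehouse, then by SKU within it.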
What's the right architecture for my application?
Instead of looking for one right answer, I would strongly suggest reading about the individual components before making up your mind. There are tons of articles and blogs. If I were you, I would read in the following order:
Microservices - why do we need them?
Serverless - In General
Event Driven architecture
SQL vs NoSQL
Lambda and DynamoDB and how they actually work
DevOps and how that would work in serverless
Patterns
Once you have a bit of understanding, you will be in a much better position to decide what suits you best.
I have a web application that is separated into several components. For some reasons (pricing), I'm considering deploying future components in different clouds.
Does anybody have references or experience with this, to tell me whether it is definitely a bad idea? I know that components being in different networks will decrease performance. At the same time, I do not like the idea of losing the power to choose where new components will live.
Must microservices-based systems all be on the same network? How do you handle this problem?
Having worked with multiple services in the past, I can tell you that services are made to work across separate networks. This is why there are security mechanisms like CAS, SAML, OAuth, HTTPS, and HMAC, to name a few.
So as long as you are able to deal with the management of the networks, and you have good security around your services (and I assume you do), then I would not be worried about breaking some unspoken microservices rule. Remember that microservices, if written well and are useful, are expected to be used across the Internet, especially for the Internet of Things, so they are expected to be used across multiple networks.
When you start trying this, I would pay very close attention to the bandwidth charges. With AWS, for example, you are OK if you stay in the same region: bandwidth between services will cost little, if anything. But let's say you use AWS and Google Cloud - now you will be paying for the bandwidth between the two providers.
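As a back-of-envelope sketch of why this matters (the per-GB rates below are hypothetical placeholders, not real provider pricing - always check the current pricing pages), cross-provider traffic can dominate the bill:

```python
# Back-of-envelope egress cost comparison between keeping components in one
# region and splitting them across providers. Rates are assumed examples.

INTRA_REGION_RATE = 0.00    # $/GB between services in one region (often free)
CROSS_PROVIDER_RATE = 0.09  # $/GB leaving a provider's network (assumed)

def monthly_egress_cost(gb_per_month, rate_per_gb):
    return gb_per_month * rate_per_gb

# 5 TB/month of inter-component chatter under each placement:
same_region_cost = monthly_egress_cost(5000, INTRA_REGION_RATE)
cross_cloud_cost = monthly_egress_cost(5000, CROSS_PROVIDER_RATE)
```

Under these assumed rates, the same 5 TB of inter-service traffic goes from free to hundreds of dollars a month once it crosses providers, which can easily cancel out the compute savings that motivated the split.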
As a suggestion, I would look at Docker as a possible solution to your concern about vendor lock-in.
You would be restricted to providers that support Docker, but in theory you could migrate between providers easily, since your application would be abstracted from each cloud provider's architecture.
Performance will take a hit with anything leaving the provider's data center. With some investigation, you might research providers that use a common Internet exchange; that would at least minimize a few hops.