Best way to audit a REST API - api-gateway

I am planning to implement an audit trail of my incoming HTTP requests.
I was thinking of having some kind of gateway that forwards every incoming call to two servers: one where the actual logic lives, and a second that passes each incoming request to a log server application, which records every API request. I just wanted to know the best way, in terms of performance, to achieve this requirement.
I am OK with doing this in nginx as well.

It's not clear if you need to have a second server/application because the process of logging this stuff is especially complex for you, or if you're just offering that as an example solution to the basic problem of logging stuff.
If you just meant it as an example, then it's probably overkill. Nginx's logging is configurable; getting it to serve your needs would be my first suggestion.
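If you do want the two-server fan-out from your question, nginx can also do that itself: the mirror module (nginx 1.13.4+) duplicates each request to a second destination and discards the mirrored response, so the client is unaffected. A minimal sketch, where backend and audit-logger are hypothetical upstream{} blocks you would define elsewhere:

    # backend and audit-logger are hypothetical upstream{} blocks
    location / {
        mirror /audit;                  # copy every request to the audit location
        proxy_pass http://backend;      # normal handling of the real request
    }

    location = /audit {
        internal;                       # not reachable from outside
        proxy_pass http://audit-logger$request_uri;
    }

Because mirrored responses are ignored, the audit copy adds little latency to the client's request.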
If your needs are more complex, then you'll have to explain them in more detail. You may also find that your question is more on-topic for Software Engineering (for advice about how to structure a custom solution) or Software Recommendations (for suggestions about existing solutions).

Related

Why should rate limiting logic be placed in application code rather than the web server?

I am exploring putting rate limiting functionality on REST APIs that are developed using Spring Boot.
After going through many articles, I came to understand that the best place to put rate limiting functionality is in the application code, rather than on the web server.
My question is: how do you decide which functionality should go where? Since rate limiting monitors incoming calls and has nothing to do with business logic, the ideal place would seem to be the web server.
Technically the web server could do the job, but in practice a web server doesn't necessarily have all the information it needs, it is not specialized for API consumption, and it may also make this feature much harder to test.
Some practical reasons why the web server could be a bad choice:
developers don't necessarily have the HTTP server's configuration available on their local machines.
you want to write unit and integration tests to check that the rate limits are applied as specified. Creating a configuration for automated testing is much simpler within the scope of your Java application than with a configuration file defined on a web server.
web servers reason in terms of HTTP request/response, not in terms of services.
rate limits may be applied according to the IP address, but also according to the username, the user's roles, or the type of service. It is not certain that you could get all of these easily from an HTTP server; roles, for example, are stored on the application side or in a database.
A better option is to set these mechanisms up with specific, specialized classes or configuration files, which simplifies their reading, their maintenance, and their testability.
As you mention Spring Boot in your tags, that and that should interest you.
I recommend spring-cloud-gateway's rate limiter
You could separate this functionality from your business logic by using Filters.
https://www.baeldung.com/spring-boot-add-filter
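For illustration, a minimal sketch of the filter approach, assuming a Spring Boot 3 servlet application (jakarta.servlet); the per-IP in-memory counter and the limit value are illustrative, not production-ready:

    import jakarta.servlet.*;
    import jakarta.servlet.http.HttpServletResponse;
    import java.io.IOException;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;
    import org.springframework.stereotype.Component;

    // Naive per-IP rate limiter kept out of the business logic.
    @Component
    public class RateLimitFilter implements Filter {

        private static final int LIMIT = 100; // max requests per window (illustrative)
        private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<>();

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            String ip = req.getRemoteAddr();
            int n = counts.computeIfAbsent(ip, k -> new AtomicInteger()).incrementAndGet();
            if (n > LIMIT) {
                ((HttpServletResponse) res).setStatus(429); // Too Many Requests
                return;
            }
            chain.doFilter(req, res); // under the limit: continue as normal
        }
        // Resetting the counters per time window (e.g. a scheduled clear()) is omitted.
    }

Because the limiter is an ordinary Spring bean, a unit test can instantiate it directly and assert the 429 behaviour, which is the testability argument made above.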

Microservice requests

I'm trying to start a little microservice application, but I'm a little bit stuck on some technicalities.
I'm trying to build an issue tracker application as an example.
It has 2 database tables, issues and comments. These will also be separate microservices, for the sake of the example.
It has to be a separate API that can be consumed by multiple types of clients, e.g. mobile, web, etc.
When using a monolithic approach, all the codebase is coupled together, and when a request comes in to, say, the REST API, I would handle the '/issues/19' request to fetch the issue with the id '19' and its corresponding comments by means of the following pseudocode.
on_request_issue(id):  # handler for the route '/issues/<id>'
    issue = IssuesModel.findById(id)
    issue.comments = CommentsModel.findByIssueId(id)
    return issue
But I'm not sure how I should approach this with microservices. Let's say that we have microservice-issues and microservice-comments.
I could let the client send a request to both '/issues/19' and '/comments/byissueid/19', but that doesn't look right to me: if a page has several such sections, we're sending a lot of requests for one page.
I could also make a request to microservice-issues and have it in turn make a request to microservice-comments, but that looks even worse to me than the above: from what I've read, microservices should not be coupled, and this couples them pretty hard.
So then I read about API gateways, which could/should receive a request and fan out to the other microservices, but I couldn't really figure out how to use an API gateway. Should I write code in there, for example, to catch the '/issues/19' request, fan out to both microservice-issues and microservice-comments, assemble the results, and return them?
In that case I feel I'm doing the work twice; won't the API gateway become a new monolith then?
Thank you for your time
An API gateway sounds like what you need.
If you keep it simple, just triggering the internal APIs and assembling their results, it will not become your new monolith.
It will also allow you to do better processing when your application grows with new microservices, or when you have to support different clients (browser, mobile apps, watch, IoT, etc.).
BTW, the example you show sounds like a good exercise; in reality, for most web apps, it looks like over-design. I would not break every DB call out into its own microservice.
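For illustration, a minimal aggregation sketch in Java (Spring WebFlux's WebClient); the service URLs and the record types are hypothetical stand-ins for your real DTOs:

    import java.util.List;
    import org.springframework.web.bind.annotation.*;
    import org.springframework.web.reactive.function.client.WebClient;
    import reactor.core.publisher.Mono;

    // Hypothetical gateway endpoint: fans out to both services
    // concurrently and assembles one response for '/issues/{id}'.
    @RestController
    public class IssueGatewayController {

        private final WebClient issues = WebClient.create("http://microservice-issues");
        private final WebClient comments = WebClient.create("http://microservice-comments");

        record Issue(String id, String title) {}
        record Comment(String id, String text) {}
        record IssueView(Issue issue, List<Comment> comments) {}

        @GetMapping("/issues/{id}")
        public Mono<IssueView> issue(@PathVariable String id) {
            Mono<Issue> issueCall = issues.get().uri("/issues/{id}", id)
                    .retrieve().bodyToMono(Issue.class);
            Mono<List<Comment>> commentsCall = comments.get()
                    .uri(b -> b.path("/comments").queryParam("issueId", id).build())
                    .retrieve().bodyToFlux(Comment.class).collectList();
            // Run both calls in parallel and zip the results together.
            return Mono.zip(issueCall, commentsCall, IssueView::new);
        }
    }

The gateway stays thin because it contains no business rules, only composition.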
One of the motivations for breaking something into small(er) services is service autonomy. In this case the question is: when the comments service is down, should you display the issue or not? If the two are always coupled anyway, they probably shouldn't reside in two services; if they aren't, then making two calls gives you this decoupling.
That said, you may still need an API gateway to solve CORS issues with your client.
Lastly, comments/byissueid is not a good REST interface; the issueId should be a query parameter: /comments/?issueId=..

Best Practice for Queuing Activities

I am currently working on a Xamarin.Forms application which requires me to interact with a Web API. There are, however, times when a call may fail, due to internet connectivity or the server going down. In such situations I would like to put the data that needs to be sent into a queue and retry later.
I was able to put the data into a queue; my question is how I can run some kind of timer so that I can keep retrying the API at regular intervals for the items in the queue. Could someone guide me on the best practice in such scenarios?
Thanks
If you don't want to implement this yourself, there is a library for that.
Especially take a look at the Retry keyword.
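The retry-queue pattern itself is language-agnostic; a minimal sketch of the timer-driven loop (shown in Java purely for illustration, with a hypothetical send() standing in for the actual Web API call):

    import java.util.concurrent.*;

    // Drain a queue of pending payloads at a fixed interval,
    // re-queueing anything that still fails.
    public class RetryQueue {
        private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        public void start() {
            scheduler.scheduleWithFixedDelay(this::drain, 30, 30, TimeUnit.SECONDS);
        }

        public void enqueue(String payload) {
            pending.add(payload);
        }

        private void drain() {
            for (String payload; (payload = pending.poll()) != null; ) {
                if (!send(payload)) {      // hypothetical API call
                    pending.add(payload);  // keep it for the next round
                    break;                 // server is likely still down; stop early
                }
            }
        }

        private boolean send(String payload) {
            // Placeholder: POST the payload to the Web API
            // and report whether the call succeeded.
            return false;
        }
    }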

Web Hosting, Web Scaling

I have a simple web application to conduct online exams for college students. All questions are multiple-choice. Around 5000 users will be taking the exam. My backend is MySQL and the front end is PHP. I want to know the hardware configuration for the servers required to host this application so that it works seamlessly for that number of users.
I am also looking at cloud solutions. If I choose Amazon EC2 instances, can somebody advise me on which EC2 instance type I should pick for this application?
It is impossible to tell the exact specs of the servers required to run your setup, because there are too many variables. However, it is definitely a good question: when I was a student at university, a professor tried to do exactly this without load testing, and on the exam date the system got overloaded and the exam had to be cancelled!
Start by testing what you already have. You can use a tool like ab or JMeter. It will simulate the requested load for you automatically, so you can check how your actual server performs and act accordingly.
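For example, ab can simulate 200 concurrent users issuing 10,000 requests in total against one page (the URL here is a hypothetical test deployment; don't aim load tests at production):

    ab -n 10000 -c 200 "https://test-server.example/exam/start.php"

ab reports requests per second and the percentage of requests served within given times, which tells you how far your current hardware is from handling 5000 users.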
Application design is also important. For example, you can cache all the questions at the web layer to avoid database queries, and make the client app do more of the work so that the server payload is minimal (a JSON response), reducing download time and load on the server.
Request multiple questions at once, and batch user responses together, to decrease the number of AJAX calls.
You could also use a NoSQL solution to avoid RDBMS constraint overhead.

Anything wrong with moving CLI validation/logic server-side?

I have a client/server application. One of the clients is a CLI. The CLI performs some basic validation, then makes SOAP requests to a server. The response is interpreted and the relevant information is presented to the user. Every command involves a request to a web service.
Every time the services are modified server-side, a new CLI needs to be released.
What I'm wondering is if there would be anything wrong with making my CLI incredibly thin. All it would do is send the command string to the server where it would be validated, interpreted and a response string returned.
(Even TAB completion could be done with the server's cooperation.)
I feel in my case this would simplify development and reduce maintenance work.
Are there pitfalls I am overlooking?
UPDATE
Scalability issues are not a high priority.
I think this is really just a matter of taste. The validation has to happen somewhere; you're just trading off complexity in your client for the same amount of complexity on the server. That's not necessarily a bad thing for your architecture; you're really just providing an additional service that gives callers an alternate means of accessing your existing services. The only pitfall I'd look out for is code duplication: if you find that your CLI validation is doing the same things as some of your services (parsing numbers, for example), then refactor to avoid the duplication.
In general you'd be okay, but client-side validation is a good way to reduce the server's workload, since bad requests can be rejected early.
What I'm wondering is if there would be anything wrong with making my CLI incredibly thin.
...
I feel in my case this would simplify development and reduce maintenance work.
People have been doing this for years using telnet/SSH for remoting a CLI that runs on the server. If all the intelligence must be on the server anyway, there might be no reason to have your CLI be a distributed client with intelligence. Just have it be a terminal session - if you can get away with using SSH, that's what I'd do - then the client piece is done once (or possibly just an off-the-shelf bit of software) and all the maintenance and upgrades happen on the server (welcome to 1978).
Of course this only really applies if there really is no requirement for the client to be intelligent (which sounds like the case in your situation).
Using name/value pairs in a request string is actually pretty prevalent. However, at that point, why bother with SOAP at all? Why not just move to a RESTful architecture?
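To make the "incredibly thin" client idea concrete, here is a minimal sketch in Java; the /cli endpoint that accepts a raw command string and returns the rendered response is hypothetical:

    import java.net.URI;
    import java.net.http.*;
    import java.util.Scanner;

    // Thin CLI: read a line, ship it to the server, print the reply.
    // All validation and interpretation happens server-side.
    public class ThinCli {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            Scanner in = new Scanner(System.in);
            while (in.hasNextLine()) {
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("https://example.com/cli")) // hypothetical endpoint
                        .POST(HttpRequest.BodyPublishers.ofString(in.nextLine()))
                        .build();
                System.out.println(
                        client.send(request, HttpResponse.BodyHandlers.ofString()).body());
            }
        }
    }

A client this small rarely needs re-releasing when the server changes, which is the maintenance win described above.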
