Which delivers better performance: batch API or HTTP/2?

I have two web applications that sync tickets (or issues) between them.
When users create projects with tens or hundreds of tickets via App A, App A sends individual POST requests to create each corresponding ticket in App B.
When users close tickets en masse via App B, App B sends individual POST requests to close each corresponding ticket in App A.
The above two use cases take a painfully long time when the ticket count exceeds 50, resulting in page timeouts for our users.
In such a scenario:
would implementing both HTTP/2 and batch API requests be superfluous? Or would implementing one solution obviate the need for the other?
since these APIs are called from within the apps' frontends (they are user-facing), would it make sense to delegate such requests to queued background tasks (with email notifications sent to the user once a task is completed), thus avoiding timeouts? Or would this be shying away from the optimization problem?
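For a sense of what each option looks like, here is a minimal sketch of both in Python using the httpx client (an assumption; any HTTP/2-capable client would do). The /tickets and /tickets/batch paths and the App B URL are hypothetical, and a batch endpoint would have to be added to App B:

```python
import asyncio
import httpx  # pip install httpx[http2]

TICKETS = [{"title": f"Ticket {i}"} for i in range(100)]

async def create_via_http2(base_url: str) -> None:
    # HTTP/2 multiplexes many requests over one TCP connection, so the
    # individual POSTs can be issued concurrently without per-request
    # connection overhead. App B must still handle each one separately.
    async with httpx.AsyncClient(http2=True, base_url=base_url) as client:
        await asyncio.gather(*(client.post("/tickets", json=t) for t in TICKETS))

async def create_via_batch(base_url: str) -> None:
    # A batch endpoint collapses N requests into a single round trip;
    # "/tickets/batch" is hypothetical and would need server-side support.
    async with httpx.AsyncClient(base_url=base_url) as client:
        await client.post("/tickets/batch", json={"tickets": TICKETS})

asyncio.run(create_via_batch("https://app-b.example.com/api"))
```

Note that the two are not mutually exclusive: batching reduces the number of requests, while HTTP/2 makes whatever requests remain cheaper to send concurrently.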

Related

Azure web app - how to avoid multiple users triggering the same endpoint at a time

Is there any way to set a limit on the number of requests for an Azure web app (ASP.NET Web API)?
I have an endpoint that runs for a long time, and I'd like to avoid multiple triggers while the application is processing one request.
Thank you
This would have to be a custom implementation, which can be done in a few ways:
1. Leverage a Queue
This involves moving the background process into a separate execution flow using a queue. So your code would be split into two parts:
API Endpoint that receives the request and inserts a message into a queue
Separate Method (or Service) that listens on the queue and processes messages one by one
The second method could either be in the same Web App or could be separated into a Function App. The queue could be in Azure Service Bus, which your Web App or Function would be listening on.
This approach has the added benefit of durability: if the Web App or Function crashes before completing a message, the message stays on the queue and is processed again, so no request is lost.
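As a rough illustration, here is what that split could look like in Python with the azure-servicebus SDK; the connection string, queue name, and handle() body are placeholders:

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"  # placeholder
QUEUE = "ticket-requests"                     # placeholder queue name

def handle(body: str) -> None:
    ...  # placeholder for the long-running work

def enqueue_request(payload: str) -> None:
    # Part 1: the API endpoint accepts the request, drops a message on
    # the queue, and returns immediately instead of doing the work inline.
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_sender(QUEUE) as sender:
            sender.send_messages(ServiceBusMessage(payload))

def process_queue() -> None:
    # Part 2: a listener (same Web App, or a Function App) processes
    # messages one by one. Completing a message only after success means
    # a crash leaves it on the queue to be retried.
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_receiver(QUEUE) as receiver:
            for msg in receiver:
                handle(str(msg))
                receiver.complete_message(msg)
```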
2. Distributed Lock
This approach is simpler but lacks durability. Here you would use an in-memory queue to process requests, ensuring only one is processed at a time by having the method acquire a lock that subsequent requests wait on before being processed.
You could leverage blob storage leases as an option for distributed locks.
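As a minimal sketch, a blob-lease lock with the azure-storage-blob SDK could look like this (it assumes a small "lock" blob already exists; all names are illustrative):

```python
import time
from azure.core.exceptions import HttpResponseError
from azure.storage.blob import BlobClient

# The lease is held on a dedicated blob; only one process at a time
# can acquire it, which gives you a lock across instances.
blob = BlobClient.from_connection_string(
    "<storage-connection-string>", container_name="locks", blob_name="lock"
)

def run_exclusively(work) -> None:
    while True:
        try:
            lease = blob.acquire_lease(lease_duration=60)
            break
        except HttpResponseError:
            time.sleep(1)  # another request holds the lock; wait and retry
    try:
        work()
    finally:
        lease.release()
```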

Should a websocket connection be general or specific?

For example, if I was building a stock trading system, I'd likely have real-time stock prices, real-time trade information, real-time updates to the order book, and perhaps real-time chat to enable traders to collude and manipulate the market. Should I have one WebSocket handle all of the above data flow, or is it better to have several WebSockets handle different topics?
It all depends. Let's look at your options, assuming your stock trader, your chat, and your order book are built as separate servers/micro-services.
One WebSocket for each server
You can have each server running their own WebSocket server, streaming events relevant to that server.
Pros
It is a simple approach. Each server is independent.
Cons
Scales poorly. The number of open TCP connections comes at a price as the number of concurrent users increases.
Increased complexity when you need to replicate the servers for redundancy, as all replicas need to broadcast the same events.
You have to build your own fallback for recovering when client data goes stale due to a lost WebSocket connection.
You need to create event handlers on the client for each type of event.
You might have to add version handling to prevent data races if initial data is fetched over HTTP while events are sent on the separate WebSocket connection.
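For illustration, one such per-server stream might look like this with the Python websockets package (a recent version that accepts single-argument handlers is assumed; the price feed is made up):

```python
import asyncio
import json
import websockets  # pip install websockets

async def price_stream(ws):
    # This server streams only its own event type; the client has to
    # open one connection like this per server (prices, chat, orders).
    while True:
        await ws.send(json.dumps({"type": "price", "symbol": "ACME", "price": 42.0}))
        await asyncio.sleep(1)

async def main():
    async with websockets.serve(price_stream, "localhost", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```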
Publish/Subscribe event streaming
There are many publish/subscribe solutions available, such as Pusher, PubNub, or SocketCluster. The idea is often that your servers publish events on a topic/subject to a message queue, which is listened to by WebSocket servers that forward the events to the connected clients.
Pros
Scales more easily. The server only needs to send one message, while you can add more WebSocket servers as the number of concurrent users increases.
Cons
You most likely still have to handle recovery from events lost during a disconnect, might still require versioning to handle data races, and still need to write handlers for each type of event.
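As a generic sketch of the fan-out pattern (the hosted products above each have their own APIs), here Redis pub/sub stands in for the message queue and a thin WebSocket server forwards whatever the backend servers publish:

```python
import asyncio
import redis.asyncio as redis  # pip install redis
import websockets              # pip install websockets

CLIENTS: set = set()

async def ws_handler(ws):
    # The WebSocket server only fans out; add more instances behind a
    # load balancer as the number of concurrent users grows.
    CLIENTS.add(ws)
    try:
        await ws.wait_closed()
    finally:
        CLIENTS.discard(ws)

async def forward_events():
    # Each backend server publishes an event once to the "events"
    # channel; every subscribed WebSocket server forwards it.
    r = redis.Redis()
    async with r.pubsub() as pubsub:
        await pubsub.subscribe("events")
        async for msg in pubsub.listen():
            if msg["type"] == "message":
                websockets.broadcast(CLIENTS, msg["data"].decode())

async def main():
    async with websockets.serve(ws_handler, "localhost", 8765):
        await forward_events()

asyncio.run(main())
```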
Realtime API gateway
This part is a shameless plug, as it covers Resgate, an open source project I've been involved in myself. But it also applies to solutions such as Firebase. By the term "realtime API gateway", I mean an API gateway that not only handles HTTP requests, but operates bidirectionally over WebSocket as well.
With web clients, you are seldom interested in events as such - you are interested in changes of state. Events are just a means to describe those changes. By fetching the data through a gateway, it can keep track of which resources the client is currently interested in. It will then keep the client up to date for as long as the data is being used.
Pros
Scales well. Client requires no custom code for event handling, as the system updates the client data for you. Handles recovery from lost connections. No data races. Simple to work with.
Cons
Primarily for client-rendered web sites (using React, Vue, Angular, etc.), as it works poorly with server-rendered pages. Harder to apply to already existing HTTP APIs.

OpenTracing - Should I trace internal service work or just API calls?

Suppose I have a service which does the following:
Receives input notification
Processes input notification which means:
some computing
storing in DB
some computing
generating its own notification
Sends its own notification to multiple clients
What is the best practice in this case: should I granularly trace each operation (computing, storing in the DB, etc.) with a separate span, or leave that to metrics (e.g. Prometheus) and create a single span for the whole notification processing?
It's somewhat up to you as to the granularity that's appropriate for your application, and also the volume of tracing data you're expecting to generate. An application handling a few requests per minute is going to have different needs than one handling 1000s of requests per second.
That said, I recommend creating spans when control flow enters or leaves your application (such as when your application starts processing a request or message from an external system, and when your application calls out to an external dependency, such as HTTP requests, sending notifications, or writing/reading from the database), and using logs/tags for everything that's internal to your application.
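A sketch of that boundary-only approach with the OpenTracing Python API (the global tracer is a no-op until a concrete implementation such as a Jaeger client is registered; compute, store_in_db, and send_notifications are placeholders):

```python
import opentracing

tracer = opentracing.global_tracer()

def compute(n): return n              # placeholder internal work
def store_in_db(result): pass         # placeholder DB write
def send_notifications(result): pass  # placeholder fan-out

def on_notification(notification):
    # One span for the whole unit of work entering the service...
    with tracer.start_active_span("process_notification") as scope:
        scope.span.set_tag("notification.id", notification.get("id"))

        result = compute(notification)            # internal: log, don't span
        scope.span.log_kv({"event": "computed"})

        # ...and child spans only where control leaves the service.
        with tracer.start_active_span("db.store"):
            store_in_db(result)
        with tracer.start_active_span("notify.clients"):
            send_notifications(result)
```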

Strategy to create task scheduler

I am creating an app which allows users to schedule repeated tasks with friends. I am thinking of ways to create the scheduling queue.
The first way would be to let the task be handled by the server. The task can be created and the scheduling algorithm would run on the server, which will send notifications to the involved users. The problem with this approach is that it could overload the server if many users were using it.
The second approach would be to let the user's app handle the task scheduling on their mobile as a background service. This would be ideal as it would not overload the server. The disadvantage of this approach is that the notifications would depend on the status of the user's phone (internet connection, on/off, etc.).
Which of the two methods would be ideal, or is there some other way to approach this problem?
I would advocate running this on the server, because of the following advantages:
only one place to keep all the details for one notification (e.g. message text, schedule). This ensures that clients don't somehow end up holding different notification data in their applications, and it results in simpler logic when notification details are updated or new friends are added. Although sometimes underestimated, I think that simplicity should be an important criterion in the architecture choice.
it seems relatively easy to scale horizontally: multiple message queues can be set up on multiple machines, without interference between the queues.
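For instance, a minimal server-side sketch using APScheduler (one scheduler among many; notify_group and the cron fields are placeholders):

```python
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()

def notify_group(task_id: str, user_ids: list) -> None:
    ...  # placeholder: push/email each involved user

def schedule_task(task_id: str, user_ids: list, cron: dict) -> None:
    # One server-side job per task keeps a single source of truth for
    # the message text and schedule, shared by all friends.
    scheduler.add_job(
        notify_group,
        trigger="cron",
        id=task_id,              # stable id makes the job easy to update
        replace_existing=True,   # editing the task just re-registers it
        args=[task_id, user_ids],
        **cron,                  # e.g. {"day_of_week": "mon", "hour": 9}
    )

scheduler.start()
```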

Prioritizing specific endpoints on Heroku to skip / spend less time in request queue

Our API service has several endpoints and one of them is critical for the user experience because it directly affects the page load time.
Is there a way to prioritize calls to GET /api/priority_endpoint over GET /api/regular_endpoint so the prioritized requests spend less time in the request queue?
Thanks...
No, this is not possible. All requests are sent to a random dyno as soon as they reach the router.
The only way you could do this would be by writing your own request queue in your app's code.
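As a sketch of what such an in-app queue could look like (the worker-thread model and priority values are illustrative):

```python
import threading
from queue import PriorityQueue

# Lower number = higher priority; the counter breaks ties by arrival order.
work_queue: PriorityQueue = PriorityQueue()
_counter = 0

def submit(handler, priority: int) -> None:
    global _counter
    _counter += 1
    work_queue.put((priority, _counter, handler))

def worker() -> None:
    # Requests to /api/priority_endpoint would be submitted with
    # priority 0 and everything else with priority 1, so the dyno
    # always drains the critical work first.
    while True:
        _, _, handler = work_queue.get()
        handler()
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()
```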
