Currently we are running a Node.js web app using the Serverless Framework. API Gateway uses a single API endpoint for the entire application and routing is handled internally; so, basically, a single HTTP ANY /{proxy+} endpoint for the whole application.
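For context, here is a minimal sketch of what such a catch-all handler looks like (routes are hypothetical, and the sketch is in C# although our actual app is Node.js; the shape is the same in either runtime):

    using Amazon.Lambda.APIGatewayEvents;
    using Amazon.Lambda.Core;

    public class MonolithHandler
    {
        // One Lambda behind a single ANY /{proxy+} route: every request for
        // every path and method lands here, and routing happens in code.
        public APIGatewayProxyResponse Handle(APIGatewayProxyRequest request, ILambdaContext context)
        {
            switch (request.Path)
            {
                case "/users":  return Ok("user list");   // hypothetical route
                case "/orders": return Ok("order list");  // hypothetical route
                default:        return new APIGatewayProxyResponse { StatusCode = 404, Body = "not found" };
            }
        }

        private static APIGatewayProxyResponse Ok(string body) =>
            new APIGatewayProxyResponse { StatusCode = 200, Body = body };
    }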
My questions are:
1) What are the disadvantages of this method? (I know Lambda is built for FaaS, but right now we are handling it as one monolithic function.)
2) How many instances can Lambda run at a time if we follow this method? Can it handle a million+ requests at a single time?
Any help would be appreciated. Thanks!
The disadvantage is, as you say, that it's monolithic, so you've not modularised your code at all. The idea is that adjusting one function shouldn't affect the rest, but in this case it can.
You can run as many instances as you like concurrently; you can set limits, though (by default an AWS account is capped at 1,000 concurrent executions per region, a soft limit put in place for safety that can be raised through a support request).
If you are running the function regularly it should also 'warm start', i.e. have a shorter boot time after the first invocation, because the container is reused.
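To sketch what that reuse means in practice (hypothetical handler, in C#): anything initialized outside the handler survives between warm invocations, so expensive setup is paid once per container rather than once per request.

    using System.Net.Http;
    using Amazon.Lambda.APIGatewayEvents;
    using Amazon.Lambda.Core;

    public class WarmHandler
    {
        // Created once when the container cold-starts; every subsequent warm
        // invocation of this container reuses it.
        private static readonly HttpClient SharedClient = new HttpClient();

        public APIGatewayProxyResponse Handle(APIGatewayProxyRequest request, ILambdaContext context)
        {
            // Only this method body runs on every request.
            return new APIGatewayProxyResponse { StatusCode = 200, Body = "ok" };
        }
    }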
I am currently using JMeter to test the response time of an API; let's call it API A. If API A calls API B, which is hosted on the same server but on a different port, is there a way for me to capture the response time of API B using JMeter?
I realize there is a similar question here that tries to accomplish the same thing, but it does not work for me: I don't see the internal call to API B.
JMeter knows nothing about what's going on under the hood of your application: it sends an HTTP request, waits for the response, and measures the time taken along with some other metrics.
If there is extra activity under the hood of an API call, the only way to capture it is with a profiler or an APM tool on the application-under-test side.
You cannot. JMeter is an outsider: it only knows there is an API A, and doesn't know the internal implementation (that API A calls API B).
A better design would be for each API to measure its own total run time and log it to a metrics server. There are a lot of server-side metrics systems you could explore.
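For example, a minimal sketch of that idea in C# (the URL and metric name are hypothetical): API A times its own internal call to API B and emits the duration to whatever metrics backend you use.

    using System;
    using System.Diagnostics;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class ApiBClient
    {
        private static readonly HttpClient Http = new HttpClient();

        // API A wraps its internal call to API B with a timer so B's latency
        // can be logged and shipped to a metrics backend.
        public static async Task<string> CallApiBAsync()
        {
            var sw = Stopwatch.StartNew();
            try
            {
                // Hypothetical URL: API B on the same host, different port.
                var response = await Http.GetAsync("http://localhost:8081/api-b");
                return await response.Content.ReadAsStringAsync();
            }
            finally
            {
                sw.Stop();
                // Replace with your metrics client (StatsD, Prometheus, ...).
                Console.WriteLine($"api_b.duration_ms={sw.ElapsedMilliseconds}");
            }
        }
    }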
We are planning to migrate from a monolith to a microservices-based architecture. I now own the responsibility of taking a module out of the monolith.
Existing Monolith:
1) Code is very tightly coupled.
2) APIs are called recursively with different parameters.
3) Some of the calls within the module I am planning to extract contain calls to a system that takes approximately 9 minutes to complete. Unfortunately, those calls are synchronous.
Points to note:
1) I am starting with a single API migration, which is a very important one and is not performing well.
2) This API consists of parallel calls to another system to perform a bunch of tasks. All the calls are blocking and time-consuming (consider the average response time to be 5-6 minutes).
Moving to a microservice-based architecture: two approaches come to mind for moving the aforementioned API from the monolith to a separate microservice, while also solving the problem of threads blocked by the long-running calls.
a) Moving in phases:
- Create a separate module.
- In this module, provide an API to push events to Kafka; another module will in turn process the request and push the response back to Kafka (a producer sketch follows below).
- The monolith, for now, will call the above-mentioned API to push events to Kafka.
- The new module will in turn call back the monolith when the task completes (response received on a separate topic in Kafka).
- Once the monolith gets responses for all the tasks, it will trigger some post-processing activity.
Advantage:
1) It solves the problem of synchronous blocking calls.
Disadvantage:
1) Changes are required in the monolith, which could introduce some bugs.
2) No fallback is available if a bug does get introduced.
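For clarity, here is a rough sketch of the producer side I have in mind, using the Confluent.Kafka client (topic names and payload are hypothetical):

    using System.Threading.Tasks;
    using Confluent.Kafka;

    public static class TaskEventPublisher
    {
        // The monolith calls this to push one task event; the new module
        // consumes "task-requests", does the slow work, and publishes the
        // result to "task-responses" for the monolith to pick up.
        public static async Task PublishAsync(string taskId, string payload)
        {
            var config = new ProducerConfig { BootstrapServers = "localhost:9092" };

            // In real code the producer would be a reused singleton, not
            // rebuilt on every call.
            using (var producer = new ProducerBuilder<string, string>(config).Build())
            {
                await producer.ProduceAsync(
                    "task-requests",
                    new Message<string, string> { Key = taskId, Value = payload });
            }
        }
    }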
b) Move the API to the microservice all at once: initially the microservice will share a common data source with the monolith, and the problem of blocking calls is solved by introducing Kafka between the new microservice and the module that takes time to process the request.
Advantage:
1) A fallback is available in the monolith.
Disadvantage:
1) Initially, the data source is shared between the two systems.
What would be the best approach for these kinds of complex tasks?
You have to take care of several things.
First
Going to microservices will be slower (90% of the time) than a monolith, because you introduce network latency. Never forget that when you go with it.
Second
You ask whether it is a good idea to go with Kafka. I would answer yes in most cases, but you mentioned that today the process is synchronous. If that is for transactional reasons, I suspect you won't be able to solve it with a message broker, because you'll be turning your strongly consistent system into an eventually consistent one: https://en.wikipedia.org/wiki/Eventual_consistency
I am not saying that it is a bad solution, only that it changes your workflow and may impact some business rules.
As a solution I offer this:
1 - Break the seams in your monolith by introducing functional keys and API composition inside the monolith (Sam Newman's book can help here).
2 - Introduce eventual consistency inside the monolith to test whether it fits the purpose. It will be easier to roll back if it doesn't.
Now you have two possibilities:
If the second step went well, go ahead and move the service's code out of the monolith into a microservice.
If the second step did not fit, think about isolating the risky part in a dedicated service, or use distributed transactions (be careful with the latter; they can be hard to manage).
I believe the best approach would be option (a): moving in phases. However, it is possible to do it while keeping a fallback strategy: you can keep a version of the untouched backend to serve as a backup if your new service encounters issues.
The approach is described in more detail in the article Low-risk monolith to microservice evolution. It gives more detail on the implementation and the thinking behind why a phased approach carries lower risk. The need to change the backend would still be present, but hopefully mitigated through unit testing.
I'm trying to get to grips with Service Fabric and I'm struggling a little bit. Some questions:
Are all Service Fabric service instances single-threaded? I created a stateless Web API, one instance, with a method that did a Task.Delay and then returned a string. Two requests to this service were served one after the other, not concurrently. Am I right in thinking, then, that the number of concurrent requests that can be served is purely a function of the service instance count in the application manifest? Edit: Thinking about this, it is probably to do with the setup of the OWIN Web API. Could it be blocking by session? I assumed there is no session by default?
I have long-running operations that I need to perform in Service Fabric (they can take several hours). Is there a recommended pattern I can use for this in Service Fabric? These are currently handled using a storage queue that triggers a WebJob. Maybe something with Reliable Queues and a RunAsync loop?
It seems you handled the first part, so I will comment on the second part: long-running operations.
Long-running operations and workflows were being handled long before Service Fabric came about. For this reason, we can build on the shoulders of giants by looking at the design patterns software experts have been using for decades, for example the famous and all-inclusive Process Manager. Mind you, this pattern is sometimes overkill; if it is in your case, just check out the rest of the related patterns in the Enterprise Integration Patterns book (by Gregor Hohpe).
As for the use of Reliable Collections, those are implementation details that come in when choosing a data structure to support the chosen design pattern.
I hope that helps
With regard to your second point, it really depends on the nature of your long-running task.
Is your long-running task the kind of workload that runs on an isolated thread, depends on local OS/VM-level resources, and eventually comes back with a result (A)? Or is it the kind of long-running task that goes through stages and builds up a model of the result through a series of persisted state changes (B)?
From what I understand of Service Fabric, it isn't really designed for running long-running workloads of type (A), but more for writing horizontally scalable, highly available systems.
If you were absolutely keen on using Service Fabric (and your kind of workload tends to be more like B than A), I would definitely find a way to break those long-running tasks down into pieces that can be processed in parallel across the cluster. But even then, there are probably more appropriate technologies designed for this, such as Azure Batch.
P.s. If you are going to put a long-running process in the RunAsync method, you should design the workload so it is interruptible and its state is persisted in a way that can be resumed from another node in the cluster (see the sketch after the quote below):
In a stateful service, only the primary replica has write access to state and thus is generally when the service is performing actual work. The RunAsync method in a stateful service is executed only when the stateful service replica is primary. The RunAsync method is cancelled when a primary replica's role changes away from primary, as well as during the close and abort events.
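To make that concrete, here is a minimal sketch (service and queue names hypothetical): work arrives on a Reliable Queue, each slice of work is processed and committed, and after a failover the new primary's RunAsync simply resumes from whatever is still queued.

    using System;
    using System.Fabric;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.ServiceFabric.Data.Collections;
    using Microsoft.ServiceFabric.Services.Runtime;

    internal sealed class LongRunningWorker : StatefulService
    {
        public LongRunningWorker(StatefulServiceContext context)
            : base(context) { }

        protected override async Task RunAsync(CancellationToken cancellationToken)
        {
            // Durable queue of work items; survives failover with the replica set.
            var queue = await StateManager.GetOrAddAsync<IReliableQueue<string>>("workItems");

            while (true)
            {
                cancellationToken.ThrowIfCancellationRequested();

                using (var tx = StateManager.CreateTransaction())
                {
                    var item = await queue.TryDequeueAsync(tx);
                    if (item.HasValue)
                    {
                        // One small, resumable slice of the multi-hour job.
                        await ProcessStepAsync(item.Value, cancellationToken);

                        // The dequeue only becomes durable on commit, so a crash
                        // before this line means the item is processed again
                        // (at-least-once) on the next primary.
                        await tx.CommitAsync();
                    }
                }

                await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
            }
        }

        private static Task ProcessStepAsync(string workItem, CancellationToken ct)
        {
            return Task.CompletedTask; // your actual work goes here
        }
    }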
P.p.s. Long-running operations are the devil when trying to write scalable systems. Try to tackle that now and save yourself the future pain if possible.
To the first point: this was purely a client issue. Chrome saw my requests as identical and so delayed the second request until the first got a response. Varying the parameters of the requests allowed them to be served concurrently.
We have a webservice that is called to provide the delivery date of a product during purchase on an eCommerce website.
We are using IBM Sterling Order Management in the backend, with its out-of-the-box (OOB) webservice and OOB service.
This webservice (WSDL) is taking too long, more than 40 seconds, which causes timeout exceptions in other integrated systems (middleware).
So we want to improve the performance of this webservice. Could you please suggest ways to do that? Would it improve if the server's spec were upgraded? As it is an OOB service, we can't customize it.
First of all, you need to figure out the performance bottleneck. To start with, you could put a verbose trace on the OOB webservice. Use the logs to see if you can zero in on any particular component or SQL consuming the majority of the time. If it's SQL, you can tune/baseline the OOB queries/tables using indexes.
If you have any user exits implemented (for the OOB API), ensure that they are lean and aren't making any expensive API calls such as the changeOrder API.
One of the questions to ask here is whether the webservice needs to respond with the actual processing results, or whether it could move the actual processing to the background (e.g. a separate integration server) and just respond with a simple acknowledgement of the webservice request. If an acknowledgement is enough, you could move the actual processing to a separate async service.
First try to find out where the actual problem is; to that end, here are a few pointers:
1) Check in OMS how much time the service takes with the same input you are using to invoke the webservice.
2) If the response time from the OMS end is fine, then check network latency/bandwidth.
3) Check CPU usage while hitting the webservice.
This is my setting: I have written a .NET application for local client machines which implements a feature that could also be used on a webpage. To keep this example simple, assume that the client installs software into which they can enter some data and get some data back.
The idea is to create a webpage that holds a form into which the user enters the same data and gets the same results back as above. Due to the company's available web servers, the first idea was to create a Mono webservice, but this was dismissed for reasons unknown. The "service" is not to be run as a webservice, but should be called by a PHP script. This is currently realized by calling the Mono application via shell_exec from PHP.
So now I am stuck with a Mono port of my application, which works fine but takes way too long to execute. I have already stripped out all unnecessary DLLs, methods, etc., but calling the application via the command line, submitting the desired data via command-line parameters, takes approximately 700 ms. We expect about 10 hits per second, so this could only work by setting up a lot of servers for this task.
I assume the 700 ms is related to the cost of starting the application every time, because it does not make much difference in terms of time whether I handle the request once or five hundred times (I take the original input, vary it slightly, and do 500 iterations with "new" data every time; starting from the second iteration, the processing time drops to approximately 1 ms per iteration).
My next idea was to set up the Mono application as a remoting server, so that it only has to be started once and can then handle incoming requests. I therefore wrote another Mono application to serve as the client. Calling the client, letting it pass the data to the server, and retrieving the result now takes 344 ms. This is better, but still way slower than I would expect and want it to be.
I have since implemented a new project from scratch based on this blog post and got stuck with the same performance issues.
The question is: am I missing something related to the Mono projects that could improve the speed of the client/server? Although the idea of creating a webservice for this task was dismissed, would a webservice perform better under these circumstances (as I would not need a client application to call the service), even though it is said that remoting is faster than webservices?
I could have made that clearer, but implementing a webservice is currently not an option (and please don't ask why, I didn't write the requirements ;))
Meanwhile I have confirmed that it is indeed the startup of the client that takes most of the time in the remoting scenario.
I could imagine accessing the server via pipes from the command line, which would be perfectly suitable in my scenario. I guess this would be done using sockets?
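Something like this minimal sketch is what I have in mind (port and line-based protocol made up): the Mono process starts once, and PHP would connect per request with stream_socket_client instead of shell_exec, so the ~700 ms startup cost is paid only once.

    using System;
    using System.IO;
    using System.Net;
    using System.Net.Sockets;

    class CalculationServer
    {
        static void Main()
        {
            // Long-lived process: the expensive startup happens once, here.
            var listener = new TcpListener(IPAddress.Loopback, 9000);
            listener.Start();
            Console.WriteLine("Listening on 127.0.0.1:9000");

            while (true)
            {
                using (var client = listener.AcceptTcpClient())
                using (var stream = client.GetStream())
                using (var reader = new StreamReader(stream))
                using (var writer = new StreamWriter(stream) { AutoFlush = true })
                {
                    var input = reader.ReadLine();   // one request per line
                    writer.WriteLine(Process(input)); // one response per line
                }
            }
        }

        // Placeholder for the existing calculation logic.
        static string Process(string input) => input?.ToUpperInvariant();
    }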
You can try to use AOT to reduce the startup time. On .NET you would use ngen for that purpose; on Mono, just run mono --aot on all assemblies used by your application.
AOT'ed code is slower than JIT'ed code, but has the advantage of reducing startup time.
You can even try to AOT framework assemblies such as mscorlib and System.
I believe that remoting is not an ideal thing to use in this scenario. However, your idea of having a long-running Mono process on the server instead of starting it every time is indeed solid.
Did you consider using SOAP webservices over HTTP? This would also help you with your 'web page' scenario.
Even if it is a little too slow for you, in my experience a custom RESTful service implementation would be easier to work with than remoting.
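For what it's worth, a self-hosted RESTful endpoint on Mono can be as small as this sketch (URL prefix and parameter are hypothetical), using HttpListener so the process stays warm and PHP can call it with an ordinary HTTP request:

    using System;
    using System.Net;
    using System.Text;

    class RestHost
    {
        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8080/calc/"); // hypothetical prefix
            listener.Start();

            while (true)
            {
                var ctx = listener.GetContext();
                var input = ctx.Request.QueryString["data"]; // hypothetical parameter
                var result = Encoding.UTF8.GetBytes(Process(input));
                ctx.Response.OutputStream.Write(result, 0, result.Length);
                ctx.Response.Close();
            }
        }

        // Placeholder for the existing calculation logic.
        static string Process(string input) => input?.ToUpperInvariant();
    }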