My Spring OAuth 2.0 authorization microservice is extremely slow. It takes 450+ ms to check a token, and generating a token takes 1.6 s and above. What could be the reason? How can I improve the performance of my microservice?
Details:
The auth server and the microservices are running on my laptop
The times I mentioned are for the auth server handling requests from only one microservice
Thanks in advance
Download a tool such as VisualVM to perform profiling of your application.
I would also record the elapsed time of individual methods to determine exactly which ones take the longest.
Once you can verify exactly what code is taking a while, you can attempt JVM optimizations, or review the code (if you're using an external library) and verify the implementation.
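For example, a quick way to record the elapsed time of a suspect call is Spring's StopWatch (plain System.nanoTime() works just as well). This is only a minimal sketch; checkToken() below is a hypothetical placeholder for whatever method you want to measure, not part of your actual code.

```java
import org.springframework.util.StopWatch;

public class TokenTimingExample {

    public static void main(String[] args) {
        StopWatch watch = new StopWatch("token-check");

        watch.start("checkToken");
        // Call the code you suspect is slow; checkToken() is a stand-in
        // for your real token-validation logic.
        checkToken("some-opaque-token");
        watch.stop();

        // Prints each recorded task with its elapsed time.
        System.out.println(watch.prettyPrint());
    }

    private static void checkToken(String token) {
        // Placeholder that simulates a slow remote token check.
        try {
            Thread.sleep(450);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```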
There might be three reasons:
Your services might be in different regions while the OAuth2 server is a central one in another region. If that is the case, create instances of the OAuth server in every region you use so that your latency improves.
Check the hashing/encryption techniques you use. SHA-256 hashing is generally preferred; this might not be the whole reason, but in some cases it helps.
Check your OAuth server's capacity, i.e. its RAM, processor and storage volume. It might also be that multiple services make the same /generatetoken call to the server, and Tomcat handles each request on its own thread; if that is the case, configuring your thread/connection pool will also help.
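As a rough illustration of that last point, here is a minimal sketch of enlarging Tomcat's request thread pool in a Spring Boot service so that concurrent token checks and token generations do not queue behind each other. The numbers are arbitrary examples, not recommendations, and the equivalent server.tomcat.* properties can be used instead.

```java
import org.apache.coyote.AbstractProtocol;
import org.apache.coyote.ProtocolHandler;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TomcatTuningConfig {

    // Customizes the embedded Tomcat connector: more worker threads means
    // more token requests can be processed in parallel (assuming the
    // bottleneck really is thread starvation and not CPU or the token store).
    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> tomcatCustomizer() {
        return factory -> factory.addConnectorCustomizers(connector -> {
            ProtocolHandler handler = connector.getProtocolHandler();
            if (handler instanceof AbstractProtocol) {
                AbstractProtocol<?> protocol = (AbstractProtocol<?>) handler;
                protocol.setMaxThreads(400);      // example value: max worker threads
                protocol.setMinSpareThreads(50);  // example value: threads kept warm
            }
        });
    }
}
```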
My question is theoretical (I am not asking about the steps for scaling) and is about keeping the same performance.
For example, our web site (Spring Boot based) is visited by 100 people per day, and after a year it starts getting 1,000,000 visits per day. In this situation I basically have the following ideas, but I need to know more and whether these ideas are good or bad:
Using Cloud services
Load balancer
Using microservices and applying distributed system techniques.
If read operations are much more than write or update, a NoSQL db can be used.
If we use JWT tokens for authentication, a distributed system would not be a problem for the security/auth side, I think.
... etc.
Could you please share your ideas and comment on the ideas above? Any help would be appreciated.
There have been several POCs (proofs of concept) and proven deployment strategies for better availability.
Keeping your points, I am summarizing and hopefully adding a bit more clarity!
Using cloud services --> This is the platform you choose; e.g. one can choose on-premise deployment or a cloud such as AWS, Azure, GCP, etc. Not directly related to the scalability question at the moment.
Load balancer --> Balances the load when you have multiple instances of your microservice. For example, you can create Docker images of your microservice and deploy them as Pods on a Kubernetes platform where you can have more than one replica (a replica is a copy of the same service). The load balancer then balances the HTTP requests among the multiple pods.
Using microservices and applying distributed system techniques --> You can, but make sure to adhere to best practices and proven microservice deployment strategies. Read more about them here: https://www.urolime.com/blogs/microservices-deployment-strategies/
If read operations are much more than write or update, a NoSQL db can be used. --> Definitely; in fact you can decompose your microservices based on the number of transactions or read/write operations, and you can use a NoSQL DB like Couchbase or MongoDB.
If we use JWT tokens for authentication, a distributed system would not be a problem for the security/auth side --> Again, such mechanisms are usually centralized, and a JWT token is only valid for a limited time (see the sketch below for how services can still validate tokens locally).
So there might be several other options for scaling, but the most used is the one I mentioned in point 2 (the load balancer).
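On the JWT point: one reason JWTs work well in a distributed setup is that each service can validate a token locally against the authorization server's published keys, instead of calling a central server on every request; only issuing tokens stays centralized. A minimal sketch, assuming Spring Security 5+ (oauth2-jose) and a hypothetical JWK set URL:

```java
import org.springframework.security.oauth2.jwt.Jwt;
import org.springframework.security.oauth2.jwt.JwtDecoder;
import org.springframework.security.oauth2.jwt.NimbusJwtDecoder;

public class LocalJwtValidationSketch {

    // The JWK set URL below is a hypothetical example; use the one
    // published by your own authorization server.
    private static final JwtDecoder DECODER = NimbusJwtDecoder
            .withJwkSetUri("https://auth.example.com/.well-known/jwks.json")
            .build();

    // Checks the signature and expiry ("time validity") locally,
    // without a network round-trip to the auth server per request.
    public static Jwt validate(String rawToken) {
        return DECODER.decode(rawToken);
    }
}
```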
I highly suggest you get a grip on the basics. Here are a few links which should be helpful:
https://microservices.io/patterns/microservices.html
https://medium.com/design-microservices-architecture-with-patterns/decomposition-of-microservices-architecture-c8e8cec453e
I would appreciate it if anyone could answer the question below.
How would a system be designed with hundreds of services if each and every service has to be independent with a dedicated port, as per microservices architecture? I mean, is it good practice to open hundreds of ports on the OS, for example?
Best Regards.
For security reasons, microservices are hosted in a private VPC, i.e. the nodes (where the microservices run) do not have public IPs, and the only way to get access to them is via an API gateway (see below). Also, "each and every service has to be independent" should be understood in terms of domains: link1 link2.
To expose services, use the API gateway pattern: "a service that provides a single entry point for certain groups of microservices" (link1, link2). Note that an API gateway is for a group of microservices, i.e. there may be several gateways for different groups of services (one for the public API, one for the mobile API, etc.).
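For illustration, here is a minimal Spring Cloud Gateway sketch of such a single entry point; the route ids, paths, and service names (order-service, user-service) are hypothetical placeholders:

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutesConfig {

    // One public entry point that fans requests out to services that stay
    // inside the private network; only the gateway needs a public address.
    @Bean
    public RouteLocator publicApiRoutes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("orders", r -> r.path("/api/orders/**")
                        .uri("lb://order-service"))   // resolved via service discovery
                .route("users", r -> r.path("/api/users/**")
                        .uri("lb://user-service"))
                .build();
    }
}
```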
Only you can answer this question, because only you know what problem you are trying to solve. Before deciding, I recommend reading about the MonolithFirst approach.
Microservices architecture is in some ways the next generation of ESB products, but in this case, due to the high number of services, I am not sure if it is a solution!
I'm new to end-to-end testing and am planning to do load testing for a website I am currently working on. I'm currently looking into JMeter and studying how to use it. My question is: would it make sense to use only one credential for the test? Basically, I would use my own credentials and then throw the same HTTP requests at the server multiple times to simulate several users logging in and using the website.
Also, if there are other ways to do load testing without using more than one credential, that would be helpful!
Thanks in advance for the help!
It depends on your use cases and your site's implementation; possible problems could be:
The site may not allow multiple logins under the same credentials like subsequent login will "throw out" the previously logged in user(s)
Depending on how session is being established/maintained you may receive the same Cookies for the same login
Most probably you will be able to implement browsing, but CRUD operations can be a big question mark
From JMeter's perspective it is not a problem to use only one account; any constraints will be on the side of the system under test.
Ideally you should treat each JMeter thread (virtual user) as a real user, so it is worth creating as many accounts as you plan to simulate and using a CSV Data Set Config to parameterize your JMeter test so that each virtual user has its own credentials.
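A minimal sketch of what that could look like: a CSV file (the accounts below are hypothetical) placed next to the test plan and referenced from a CSV Data Set Config. If the Variable Names field is left empty, JMeter takes the first line as the column names, and the login sampler can then send ${username} and ${password} instead of hard-coded credentials, so each virtual user logs in with a different account.

```
username,password
loadtest_user_001,S3cret-001
loadtest_user_002,S3cret-002
loadtest_user_003,S3cret-003
```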
I have a Visual Studio load test that runs through the pages on a website, but I have experienced big differences in performance when using a load balancer. If I run the tests going straight to Web Server 1, bypassing the load balancer, I get an average page load time of under 1 second for 100 users, as an example. If I direct the same test at the load balancer with 2 web servers behind it, then I get an average page load time of about 30 seconds - it starts quick but then deteriorates.
This is strange, as I now have 2 web servers load balanced instead of using 1 directly, so I would expect to be able to handle more load. I am testing this with Azure Application Gateway now, and Azure VMs. I previously experienced the same problem with an Nginx setup; I thought it was due to that setup, but now I find I have the same issue on Azure. Any thoughts would be great.
I had to completely disable the firewall to get consistent performance. I also ran into other issues with the firewall: it gave us max entity size errors from a security module, and after discussing with Azure Support, this entity size cannot be configured, so keeping the firewall would mean some large pages would no longer function and would get this error. This happened even with all rules disabled; I spent a lot of time experimenting with different rules on and off. The SQL injection rules didn't seem to like our ASP.NET Web Forms site. I have now simulated 1,000 concurrent users split between two test agents, and the performance was good for our site, with an average page load time well under a second.
Here is a list of things that helped me improve the same situation:
Add a non-SSL listener and use that (i.e. HTTP instead of HTTPS). Obviously this is not the advised solution, but it may give you a hint (offload SSL to the backend pool servers? Add more gateway instances?).
Disable WAF rules (slight improvement)
Disable WAF + Added more gateway instances (increased from 2 to 4 in my case) - SOLVED THE PROBLEM!
How come the response time is very different when calling the same action/page at different times of day? I'm working on an internal server where I'm the only one who uses the application (which doesn't work with an internet connection).
I'm not connected to a network, and there is only one user running the app (which is me). It's an ASP site with a remote database.
Once again, where are you going to start? You're seriously going to need to look at all aspects of the server that the application is on.
If you have a connected database then you'll need to look at whether:
the database is on a remote server - network issues can interfere quite heavily with your timings here;
it is on the same server - if this is an instanced database you will need to take into account the performance impact of the service that is managing your database and all of the related aspects of that (e.g. do you have any kind of agents running background tasks for the database?);
you are running a standalone database like MS Access - this may cause the least disruption in some ways but can be disastrous in others.
What type of web-application are you looking at?
A simple scripted non-managed IIS ASP site - Very little to manage via IIS here; no need to section off a pool for the application.
A full blown IIS managed application - IIS managed, passing of cookies, credentials etc (all takes slices of time).
If you are connected to a network, then...
How many users are on the network - though most machines on the network may have a negligible impact on your application server or PC, there are definitely some that have a real impact, such as DNS servers and the like; they need to gather network information for the successful management and running of the network as a whole. Your application server will also communicate with other servers to say things like: "Hi! I'm over here!"
Perhaps the most important question should be regarding your server(s):
What services are running - every service that runs on your server swallows time slices.
What services are not running on your server? - to keep your timings realistic, should you stop any services or (more importantly) should you not?
What services are running on your database server? - just as important as your main application server, your database server needs time to furnish data to your application. If there are other services running there, this can have a heavy impact on your timings.
Please everyone, chip in here - there's just so much to take into account.
Without an adequate description of your setup, it's very difficult for anyone to give a wholly valid answer.