I've inherited a system where data from a SQL RDBMS that is unlikely to change is cached on a web server.
Is this a good idea? I understand the logic of it - I don't need to query the database for this data with every request because it doesn't change, so just keep it in memory and save a database call. But, I can't help but think this doesn't really give me anything. The SQL is basic. For example:
SELECT StatusId, StatusName FROM Status WHERE Active = 1
This gives me fewer than 10 records. My database is located in the same data center as my web server. Modern databases are designed to store and recall data. Is my application cache really that much more efficient than the database call?
The problem comes when I have a server farm and have to come up with a way to keep the caches in sync between the servers. Maybe I'm underestimating the cost of a database call. Is the performance benefit gained from keeping data in memory worth the complexity of keeping each server's cache synchronized when the data does change?
Benefits of caching are related to the number of times you need the cached item and the cost of getting it. Your status table, even though it is only 10 rows long, can be "costly" to get if you have to run a query every time: establish a connection if needed, execute the query, pass the data over the network, and so on. If it is used frequently enough, the benefits add up and can be significant. Say you need to check some status 1,000 times a second, or on every website request: you have saved 1,000 queries, your database can do something more useful, and your network is not loaded with chatter.

For your web server, the cost of retrieving the item from the cache is usually minimal (unless you're caching tens or hundreds of thousands of items), so pulling something from the cache will be quicker than querying the database almost every time. If your database is the bottleneck of your system (which is the case in a lot of systems), then caching is definitely useful.
But the bottom line is that it is hard to say yes or no without running benchmarks or knowing the details of how you're using the data. I have just highlighted some of the things to consider.
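To make the trade-off concrete, here is a minimal sketch of the cache-aside pattern for the status query above, assuming a Python web application and a generic DB-API connection; the cache and function names are illustrative, not from the original system.

```python
import time

# In-process cache-aside sketch (illustrative names; the TTL is a guess,
# tune it to how "unlikely to change" the data really is).
_cache = {}          # key -> (value, expiry timestamp)
CACHE_TTL = 300      # seconds

def get_active_statuses(db_conn):
    """Return [(StatusId, StatusName), ...], hitting the database only on a cache miss."""
    entry = _cache.get("active_statuses")
    if entry and entry[1] > time.time():
        return entry[0]                      # cache hit: no database round trip

    cur = db_conn.cursor()
    cur.execute("SELECT StatusId, StatusName FROM Status WHERE Active = 1")
    rows = cur.fetchall()
    _cache["active_statuses"] = (rows, time.time() + CACHE_TTL)
    return rows
```

On a single server this is trivial; the synchronization cost the question worries about only appears once several servers each hold their own copy of the cache.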
There are other factors which might come into play; for example, the use of EF can add considerable extra processing to a simple data retrieval. The quantity of requests, not just the volume of data, could be a factor.
Future design might influence your decision - perhaps the cache gets moved elsewhere and is no longer co-located.
There's no right answer to your question. In your case, maybe there is no advantage. Though there is already a disadvantage to not using a cache - you have to change existing code.
Related
I have a few queries to a database that return absolutely constant responses, i.e. some entries in this database are never changed after they are written.
I'm wondering if I'm to implement caching on them with Redis, should I set an expiration time?
Pros and cons of not doing that -
Pros: Users will always benefit from caching (except for the first query)
Cons: The number of these entries to be queried is growing. So Redis will end up using more and more memory.
Edit
To give more context, the queries run quite slowly; each of them may take seconds. It would be beneficial to minimize the number of users who experience this.
Also, each of these results is on the order of several kB in size. The number (not the size) of entries may increase by about 1 per minute.
Sorry for answering with questions. Still waiting for enough reputation to comment and clarify.
Answering your direct question:
Are the number of queries you expect unbounded?
No: You could improve the first user's experience by triggering the queries on startup and leaving the results in the cache. For other responses that are expected to change, you could attach a TTL and use one of the volatile-* maxmemory-policy settings in the config (volatile-ttl, volatile-lru, volatile-lfu, or volatile-random) so that only keys with TTLs are evicted.
Yes: Prioritize these entries by attaching a TTL and refreshing it each time an entry is requested, to keep it in the cache as long as possible, and use whichever memory-management policy best fits the rest of your use case (a minimal sketch follows below).
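As a rough illustration of that TTL-refreshing cache-aside pattern, here is a sketch using redis-py; the key, the TTL value, and the run_slow_query callable are assumptions, and the query result is assumed to already be serialized to a string.

```python
import redis

r = redis.Redis()          # assumes a local Redis instance
TTL_SECONDS = 24 * 3600    # illustrative value, not taken from the question

def get_result(key, run_slow_query):
    """Cache-aside read that refreshes the TTL on every hit so hot keys stay resident."""
    cached = r.get(key)
    if cached is not None:
        r.expire(key, TTL_SECONDS)      # keep frequently requested entries alive
        return cached
    value = run_slow_query(key)         # the multi-second query described in the question
    r.set(key, value, ex=TTL_SECONDS)   # the TTL lets the volatile-* policies evict it later
    return value
```

With TTLs attached, a volatile-* eviction policy keeps Redis memory bounded even as the number of entries keeps growing.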
Related concerns:
If these are really static values, why are you querying a database rather than reading from a flat file of constants generated once and read at startup?
Have you attempted to optimize your queries?
I have a Redshift database with an insane quantity of data, and we use a live connection and Tableau Online. The problem is that the load times are really high, almost five minutes. What can I try to improve this?
There are hundreds of factors that impact performance. I would start with workbook optimization. Workbook Performance Tips
I have been in a similar situation myself, although with a different DB. Basically, you have to choose between a live connection to your DB, which, as you've seen, can suffer performance issues when you have a lot of data, and an extract.
Tableau wants you to use extracts because this is where it can really help you improve the performance of a workbook over large data sets, but I have been in situations where there was a requirement for live data and Tableau's extract schedules did not suit my needs.
If you have no options but to use a live connection then consider whether you could partition your data and connect the workbook to a part of it to improve performance, or possibly pre-aggregate some of the historical data to make it more manageable.
It may also be worth thinking about whether you need the whole dashboard to connect to live data, or whether you could feed live data via a smaller query to a couple of workbooks and have the rest feed off extracted data.
As I'm sure you can see, there is no one-stop solution, it depends what works best for you and the users of your reports.
I know this may be a repeat of other questions, but I have started using WebPerformanceTest and LoadTest in my web projects, and I can run both a WebPerformanceTest and a LoadTest.
Now, what are the parameters/statistics that I need to share with the dev team or business team? I can think of these, but it would be great if someone could share what other parameters I might have to consider:
1. No. of users the application can support
2. Response time the application can give under a sustainable load
The following are things you can consider sharing:
If SLAs were specified by the dev team or stakeholders and your performance test shows that the web application is not meeting those SLAs, you can share that.
The next question that comes to your mind and theirs is: why? (Try finding out which part/tier is taking the most time or is a bottleneck.) This can be done by analyzing logs or using a profiler, which will point out costly operations and slow components.
The next question is the job of a performance engineer: how to resolve the bottlenecks and improve the performance of the application. If you know the application very well, try tuning it, and share the improvement results after tuning with the dev team or stakeholders.
The maximum number of users may be confusing if you do not limit response time. For 100 ms requests, 10 simultaneous users mean 100 rps (requests per second), while for 10 s requests, 100 simultaneous users mean only 10 rps.
If you use simple hit-based testing (e.g. testing a single page or a specific request), it may be better to report an rps metric instead.
For response time, the mean can be misleading as well, especially when response times have high variance; it's better to report response times at several percentiles.
For example: 50% in 50 ms, 75% in 55 ms, 90% in 60 ms, 95% in 70 ms, 99% in 90 ms and 100% in 10 s, with an average time of 150 ms. For some services 150 ms is very good, but about 1% of really slow answers is unacceptable, and you can hardly find that problem using just the mean and median response time.
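As a small illustration of percentile reporting (the sample data below is made up, not from the answer), a sketch in Python:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile of a non-empty list of numbers."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Made-up response times in milliseconds, including one pathological outlier.
response_times_ms = [50, 52, 55, 60, 61, 70, 72, 90, 95, 10_000]
for p in (50, 75, 90, 95, 99, 100):
    print(f"p{p}: {percentile(response_times_ms, p)} ms")
print(f"mean: {sum(response_times_ms) / len(response_times_ms):.0f} ms")
```

The mean is dragged up by the single slow request, while the percentile breakdown shows where the bulk of users actually sit.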
Also, in my experience, collecting resource usage stats (CPU, memory, I/O intensity and network usage) is very helpful for determining bottlenecks (e.g. a service slow-down due to high I/O caused by an insufficient amount of memory for caches).
Are you asking the right question?
For me a big part of load and performance testing is deciding what my customer wants to learn about the system being tested. There is an element of "what data can I show the customer?" but that is based on interpreting what they ask for. The customer may not know what to ask, your job as a tester is to understand what the customer wants and provide them with the answers they want.
The two topics you list show how the system appears to its users: when it will break and how fast it responds. There are several variations on those factors based on rate-of-change of user load and on duration of the test.
Other factors include the performance of the various parts of the server computers being tested. Visual Studio load tests can collect performance data from other computers while the test runs, so they can monitor the web server(s), database server(s), application server(s) and so on. On each of these servers, data about CPU and memory usage, SQL and IIS performance, and much more can be collected. All this data can be compared (most easily via graphs) against user load, error rates and transaction times to determine which parts of the system have plenty of headroom, which are busy, and where the bottlenecks occur. Monitoring all this data may also reveal threshold warnings from the various servers; these should be checked against the Microsoft documentation and, perhaps, other sources to determine whether they are adversely affecting system performance and whether they should be investigated in more detail.
These and many other ideas are possible but it all goes back to working out what your customer wants to learn.
The same question was asked on another forum and the above words are almost identical to the answer I posted there.
You can furnish the following details to your clients:
Response Time
Hits per Second
Throughput
Connections Per Second
Time to first buffer
Number of Errors
Transactions Graph
CPU, Memory, and Disk Utilization
Network Utilization (if applicable)
Number of database records inserted/updated/deleted
It sounds like you simply have no (or exceedingly poor) requirements, and that you don't have great depth in the field of performance testing and engineering. As far as what to collect:
Before the test:
Full load profile of business functions that make up the load.
Documentation of each business function. Items to time within each business function.
Expected response times for each of the timed business functions
Pay special attention to think times and iteration pacing
Web logs from the current system so you can objectively measure how many people are on the system at any given time, not how many sessions are alive and have not yet timed out.
Test Environment with some defined match level to the production environment to scale your load appropriately.
In the test:
Response times matched to the timing of the business functions on the requirements / user stories
Other enumerated datapoints for requirements (hits, volume returned, etc...)
A measurement of any finite resource in the system under test for bottleneck identification for slow response times. You can start at the top level (CPU, DISK, MEMORY, NETWORK) and work your way down through those stats as you find a resource constriction at the top level.
Post Test:
Executive overview: Did you hit the requirements (YES|NO)?
Detailed data: response times, monitor peaks
Analysis: Where is the likely bottleneck holding you back?
If you are attempting to represent human behavior, then under no circumstances should you eliminate think time. Think time, or the time between requests in an individual session, is baked into the definition of the client-server model, and as you reduce it to zero your test becomes less and less a predictor of what will happen in production.
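For illustration only, here is a toy single-user load loop in Python that keeps think time in the model; the URL and the think-time range are assumptions, not values from the answer.

```python
import random
import time
import urllib.request

TARGET_URL = "http://localhost:8080/"   # hypothetical system under test
THINK_TIME_RANGE = (3.0, 8.0)           # seconds between requests, like a real user

def run_virtual_user(iterations=10):
    """Issue a handful of requests, pausing a human-like amount between them."""
    for _ in range(iterations):
        start = time.monotonic()
        with urllib.request.urlopen(TARGET_URL) as resp:
            resp.read()
        elapsed = time.monotonic() - start
        print(f"response time: {elapsed * 1000:.0f} ms")
        # Removing this sleep turns the script into a stress tool rather than
        # a model of real user behaviour.
        time.sleep(random.uniform(*THINK_TIME_RANGE))
```

Dropping the sleep is exactly the kind of shortcut the paragraph above warns against.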
Typically, it is based on the benchmark that you want to achieve with the given hardware and environment.
The following are the key parameters:
No.of concurrent users (manual and system threads)
Load of transactional and existing data
Response time (typically page)
Throughput
Utilization of CPU, memory, disk I/O and network bandwidth (applicable where there is an integration with peripheral systems)
Success percentage
I want to scrape a large number of web pages (1000/second) and save 1-2 numbers from these web pages into a database. I want to manage the workers with RabbitMQ, but I also have to write the data somewhere.
Heroku PostgreSQL has a concurrency limit of 60 requests in their cheapest production tier.
Is PostgreSQL the best solution for this job?
Is it possible to setup a Postgres Database to perform 1000 writes per second in development on my local machine?
Is it possible to setup a Postgres Database to perform 1000 writes per second in development on my local machine?
Try it and see. If you've got an SSD, or don't require crash safety, then you almost certainly can.
You will find that with anything you choose, you have to make trade-offs with durability and write latencies.
If you want to commit each record individually, in strict order, you should be able to achieve that on a laptop with a decent SSD. You will not get it on something like a cheap AWS instance or a server with a spinning-rust hard drive, though, as they don't have good enough disk flush rates. (pg_test_fsync is a handy tool for looking at this.) This is true of anything that does genuine atomic commits of individual records to durable storage, not just PostgreSQL: about the best rate you're going to get is the maximum disk flush rate / 2, unless it's a purely append-only system, in which case the commit rate can equal the disk flush rate.
If you want higher throughput, you'll need to batch writes together and commit them in groups to spread the disk sync overhead. In the case of PostgreSQL, the commit_delay option can be useful to batch commits together. Better still, buffer a few changes client-side and do multi-valued inserts. You can also turn off synchronous_commit for a transaction if you don't need a hard guarantee that it's committed before control returns to your program.
I haven't tested it, but I expect Heroku will allow you to set both of these params on your sessions using SET synchronous_commit = off or SET commit_delay = .... You should test it and see. In fact, you should run a simulated workload benchmark and see if you can make it go fast enough for your needs.
If you can't, you'll be able to use alternative hosting that will, with appropriate configuration.
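As a rough sketch of the client-side batching idea above (multi-valued inserts plus relaxed durability), assuming psycopg2 and a hypothetical scraped_values(url, value) table:

```python
import psycopg2
from psycopg2.extras import execute_values

# Hypothetical connection string and table; only the batching pattern matters here.
conn = psycopg2.connect("dbname=scraper")
conn.autocommit = False

def flush_batch(rows):
    """Write a buffered batch of (url, value) pairs in one multi-valued INSERT."""
    with conn.cursor() as cur:
        # Optional: relax durability for this transaction if losing the last
        # few rows on a crash is acceptable.
        cur.execute("SET LOCAL synchronous_commit = off")
        execute_values(cur,
                       "INSERT INTO scraped_values (url, value) VALUES %s",
                       rows)
    conn.commit()   # one commit (and at most one flush wait) per batch, not per row
```

Each batch amortizes the commit overhead across many rows, which is where most of the extra throughput comes from.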
See also: How to speed up insertion performance in PostgreSQL
PostgreSQL is perfectly capable of handling such a job. Just to give you an idea, PostgreSQL 9.2 is expected to handle up to 14,000 writes per second, but this largely depends on how you configure, design and manage the database, and on the available hardware (disk performance, RAM, etc.).
I assume the limit imposed by Heroku is to avoid potential overloads. You may want to consider an installation of PostgreSQL on a custom server or alternative solutions. For instance, Amazon recently announced the support for PostgreSQL on RDS.
Finally, I just want to mention that for the majority of standard tasks, the "best solution" is largely dependent on your knowledge. An efficiently configured MySQL is better than a badly configured PostgreSQL, and vice-versa.
I know companies that were able to reach unexpected results with a specific database by highly optimizing the setup and the configuration of the engine. There are exceptions, indeed, but I don't think they apply to your case.
I recently completed development of a mid-trafficked(?) website (peak 60k hits/hour); however, the site only needs to be updated once a minute, and achieving the required performance can be summed up by a single word: "caching".
For a site like SO where the data feeding the site changes all the time, I would imagine a different approach is required.
Page cache times presumably need to be short or non-existent, and updates need to be propagated across all the web servers very rapidly to keep all users up to date.
My guess is that you'd need a distributed cache to control the serving of data and pages that is updated on the order of a few seconds, with perhaps a distributed cache above the database to mediate writes?
Can those more experienced that I outline some of the key architectural/design principles they employ to ensure highly interactive websites like SO are performant?
The vast majority of sites have many more reads than writes. It's not uncommon to have thousands or even millions of reads to every write.
Therefore, any scaling solution depends on separating the scaling of the reads from the scaling of the writes. Typically scaling reads is really cheap and easy, scaling the writes is complicated and costly.
The most straightforward way to scale reads is to cache entire pages at a time and expire them after a certain number of seconds. If you look at the popular website Slashdot, you can see that this is the way they scale their site. Unfortunately, this caching strategy can result in counter-intuitive behaviour for the end user.
I'm assuming from your question that you don't want this primitive sort of caching. Like you mention, you'll need to update the cache in place.
This is not as scary as it sounds. The key thing to realise is that, from the server's point of view, Stack Overflow does not update all the time. It updates fairly rarely, maybe once or twice per second. To a computer, a second is nearly an eternity.
Moreover, updates tend to occur to items in the cache that do not depend on each other. Consider Stack Overflow as an example. I imagine that each question page is cached separately. Most questions probably get an update per minute on average for the first fifteen minutes, and then perhaps once an hour after that.
Thus, in most applications you barely need to scale your writes. They're so few and far between that you can have one server doing the writes; Updating the cache in place is actually a perfectly viable solution. Unless you have extremely high traffic, you're going to get very few concurrent updates to the same cached item at the same time.
So how do you set this up? My preferred solution is to cache each page individually to disk and then have many web-heads delivering these static pages from some mutually accessible space.
When a write needs to be done, it is done from exactly one server, and this updates that particular cached HTML page. Each server owns its own subset of the cache, so there isn't a single point of failure. The update process is carefully crafted so that no two requests write to the file at exactly the same time.
I've found this design has met all the scaling requirements we have so far required. But it will depend on the nature of the site and the nature of the load as to whether this is the right thing to do for your project.
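A minimal sketch of that single-writer, update-in-place idea, assuming a hypothetical CACHE_DIR shared by the web heads and a hypothetical render_html function (neither is from the answer); the atomic rename plays the role of the "carefully crafted" update step:

```python
import os
import tempfile

CACHE_DIR = "/var/www/page-cache"       # hypothetical shared static-file space

def rewrite_cached_page(page_id, render_html):
    """Regenerate one cached page atomically so readers never see a partial file."""
    html = render_html(page_id)          # hypothetical renderer that hits the database
    final_path = os.path.join(CACHE_DIR, f"{page_id}.html")
    # Write to a temp file in the same directory, then swap it in; os.replace is
    # atomic on POSIX, so concurrent readers see the old page or the new one,
    # never a half-written file.
    fd, tmp_path = tempfile.mkstemp(dir=CACHE_DIR, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        f.write(html)
    os.replace(tmp_path, final_path)
```

The web heads then serve these files as plain static content, keeping the read path completely independent of the database.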
You might be interested in this article, which describes how Wikimedia's servers are structured. Very enlightening!
The article links to this pdf - be sure not to miss it.