I am new to AWS Redshift. Although I have read through the concepts, I want to know how to proceed with load testing in Redshift. I am very comfortable with The Grinder, but I am rather confused about how to use it with Redshift.
My basic requirement is to push a certain number of rows and measure query and server performance. I have done a lot of performance reviews on cloud deployments of MySQL, Cassandra, etc. Please point me to a concept or tool to start load testing with.
The Grinder is not a good fit for Redshift: it is designed to test the impact of multiple concurrent users reading and writing data, and Redshift performs poorly for single-row inserts, as the documentation notes.
For Redshift you should evaluate performance at realistic data sizes and query complexity. Look into the Star Schema Benchmark or TPC-H.
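If it helps, here is a minimal sketch of that kind of benchmark using psycopg2 (Redshift speaks the Postgres wire protocol): bulk-load a table via COPY from S3, then time a representative query. The cluster endpoint, S3 path, IAM role and table names are placeholders, not a prescribed setup.

    # Hypothetical benchmark skeleton: bulk-load, then time a query.
    import time
    import psycopg2

    conn = psycopg2.connect(
        host="my-cluster.xxxxxx.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="dev", user="admin", password="...")
    conn.autocommit = True
    cur = conn.cursor()

    # Bulk load with COPY; single-row INSERTs are exactly what Redshift is bad at.
    cur.execute("""
        COPY lineorder FROM 's3://my-bucket/ssb/lineorder/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
        FORMAT AS CSV GZIP;
    """)

    # Time a representative aggregation at the data size you care about.
    start = time.perf_counter()
    cur.execute("SELECT lo_orderdate, SUM(lo_revenue) FROM lineorder GROUP BY lo_orderdate;")
    cur.fetchall()
    print("query took %.2f s" % (time.perf_counter() - start))

Repeat the timing at increasing data volumes (and with concurrent sessions if you need concurrency numbers); the SSB and TPC-H kits provide the schemas and data generators.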
I have six services talking to the same SQL Server 2016 database (Payments); some services do write operations and some do reads. The database server hosts other databases besides the Payments database. We do not have any archival job in place on the Payments database, and we recently hit 99% CPU usage as well as memory pressure on the database server.
Obvious steps I can take include:
Create archival jobs to migrate old data to an archive database.
Scale up the database server.
But I still want to explore other solutions. I have the questions below.
Can we use different databases for read and write operations? If yes, how?
Can we migrate data on the fly from the RDBMS to a NoSQL database, since that is faster for read operations?
What is the best design for applications where concurrent read and write operations happen?
Storage is all about trade-offs, so it is extremely tricky to pick the right storage solution without diving deep into aspects such as latency, availability, concurrency, access patterns and security requirements. In this particular case, payments data is being stored, which should be confidential and straight away rules out some storage options. In general, you should:
Cache the read data, but if the same data is being modified constantly this will not work. Caching also doesn't work well when your reads are not public (i.e., cannot be reused across multiple read calls, preferably across multiple users), which is likely here since we are dealing with payments data.
The read/write master plus read-only replicas pattern is another common way to scale reads (a minimal routing sketch follows below). It doesn't scale writes, though, and it depends on whether the application can tolerate replication lag.
Sharding is the common pattern for scaling writes. It comes with other burdens, such as cross-node query aggregation (in some databases).
Finally, based on the data access pattern, refactor the schema and employ different databases. CQRS (Command Query Responsibility Segregation) is one way to achieve this, but it has its own pros and cons. For more details: https://learn.microsoft.com/en-us/azure/architecture/patterns/cqrs
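As an illustration of the read/write split mentioned above, here is a minimal, hypothetical sketch that routes writes to the primary and reads to a read-only replica using pyodbc. The server names, table and columns are invented; a real CQRS setup involves much more (separate models, eventual consistency, and so on).

    import pyodbc

    # Placeholder connection strings: writes go to the primary, reads to a replica.
    WRITE_DSN = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=payments-primary;DATABASE=Payments;Trusted_Connection=yes"
    READ_DSN  = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=payments-replica;DATABASE=Payments;Trusted_Connection=yes"

    write_conn = pyodbc.connect(WRITE_DSN, autocommit=True)
    read_conn  = pyodbc.connect(READ_DSN, autocommit=True)

    def record_payment(payment_id, amount):
        # Commands (writes) always hit the primary.
        write_conn.execute(
            "INSERT INTO dbo.Payments (PaymentId, Amount) VALUES (?, ?)",
            payment_id, amount)

    def get_payment(payment_id):
        # Queries (reads) hit the replica; they may lag the primary slightly.
        return read_conn.execute(
            "SELECT PaymentId, Amount FROM dbo.Payments WHERE PaymentId = ?",
            payment_id).fetchone()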
A few years back I read this book, which helped me immensely in understanding these concepts: https://www.amazon.com/Scalability-Startup-Engineers-Artur-Ejsmont/dp/0071843655
We have a need to run analytics queries on the data stored in RDS, and that is becoming very slow because of GROUP BY queries and the ever-increasing size of the tables.
For example, we have the following 3 tables in RDS:
alm(id,name,cli, group_id, con_id ...)
group(id, type,timestamp ...)
con(id,ip,port ...)
Each of the tables holds a very large amount of data and is updated several times a minute as new data comes in.
Now we want to run aggregation queries like:
select name from alm, group, con where alm.group_id=group.id and alm.con_id=con.id group by name, group.type, con.ip
We also want users to be able to run custom aggregation queries in the future, as opposed to the fixed query we provide.
So far the options we are considering are moving to Cassandra, Elasticsearch or DynamoDB so that aggregation would be faster. Can someone guide us on how to go about this problem, or share any experience? Does anybody know whether any of these technologies has a clear advantage over the others?
Cassandra and DynamoDB are quite different from Elasticsearch, and all three are very different from relational database offerings.
For ad-hoc analytics, relational databases, with a well designed schema, can be pretty good up to the point where you need to split your data across multiple servers (then replication issues start to dominate the benefits). And that's really the primary motivation for non-relational databases. But the catch is that in order to solve the horizontal scaling problem, they generally trade some features such as joining and aggregating.
Elasticsearch is really great at answering search queries, but not particularly good at aggregations (other than very basic counts, sums and their estimates). It's amazing at indexing copious amounts of data, but it can't answer queries that require complex cross-index operations. It is also not as robust (rebuilding indexes may be needed from time to time).
If you have high volumes of data and you need aggregation, you pretty much have two options:
if you can get away with offline analytics, then distributed data processing frameworks such as Spark can get you the answers you need very efficiently
if you need online analytics, the most common approach is to pre-compute the aggregations and update them as more data arrives, so that queries can be answered very fast without having to process a lot of data each time (a rough sketch of this follows below)
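A rough sketch of the pre-computed aggregation idea, assuming a MySQL-compatible RDS instance and the pymysql driver; the rollup table, its key and the callback are invented for illustration:

    import pymysql

    conn = pymysql.connect(host="my-rds-endpoint", user="app",
                           password="...", database="analytics")

    # Small summary table keyed by the dimensions the reports group by.
    ROLLUP_DDL = """
    CREATE TABLE IF NOT EXISTS alm_rollup (
        name       VARCHAR(100),
        group_type VARCHAR(64),
        con_ip     VARCHAR(45),
        row_count  BIGINT NOT NULL DEFAULT 0,
        PRIMARY KEY (name, group_type, con_ip)
    )
    """

    UPSERT = """
    INSERT INTO alm_rollup (name, group_type, con_ip, row_count)
    VALUES (%s, %s, %s, 1)
    ON DUPLICATE KEY UPDATE row_count = row_count + 1
    """

    with conn.cursor() as cur:
        cur.execute(ROLLUP_DDL)
    conn.commit()

    def on_new_alm_row(name, group_type, con_ip):
        # Call this whenever a new alm row arrives: the aggregate stays current,
        # so reporting queries hit the tiny rollup table instead of the raw tables.
        with conn.cursor() as cur:
            cur.execute(UPSERT, (name, group_type, con_ip))
        conn.commit()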
Don't be afraid to mix and match though. Relational databases have their purpose, as do non-relational ones. There is no silver bullet.
One more option is a column-oriented database. This kind of DB is more suitable for analytics cases where you have many data fields and you want to perform aggregations or extract some subset of fields from a large amount of data.
Recently Yandex ClickHouse has become very popular, and there is a column-oriented service from Amazon, Redshift. There are also several other solutions.
Store the data in Parquet, partition it efficiently, and use Spark.
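A minimal PySpark sketch of that idea (paths, column names and the JDBC source are placeholders, and the relevant JDBC driver is assumed to be on the classpath): write the raw rows as Parquet partitioned by date, then aggregate only the partitions a report needs.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("alm-analytics").getOrCreate()

    # Periodic export from the source database into date-partitioned Parquet.
    alm = spark.read.jdbc(url="jdbc:mysql://my-rds-endpoint/analytics",
                          table="alm",
                          properties={"user": "app", "password": "..."})
    (alm.withColumn("dt", F.to_date("created_at"))   # hypothetical timestamp column
        .write.partitionBy("dt").mode("append").parquet("s3://my-bucket/alm/"))

    # Reports read only the partitions they need, then aggregate.
    result = (spark.read.parquet("s3://my-bucket/alm/")
              .where(F.col("dt") >= "2024-01-01")
              .groupBy("name").count())
    result.show()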
We are currently interested in evaluating Datameer and have a few questions. Are there any Datameer users who can answer these questions?
1. Since Datameer works off HDFS, are the querying speeds similar to those of Hive? How does the querying speed compare with columnar databases?
2. Since Hadoop is known for high latency, is it advisable to use Datameer for real-time querying?
Thank you.
Ravi
Regarding 1:
Query speeds are comparable to Hive.
But Datameer is a lot faster in the design phase of your "query". Datameer provides a real-time preview of what the results of your "query" would look like, which happens in memory and not on the cluster. The preview is based on a representative sample of your data. It's only a preview, not the final results, but it gives you constant feedback on whether your analytics make sense while you design them.
To test a Hive query you would have to execute it, which makes the design process very slow.
Datameer's big advantages over Hive are:
Loading data into Hadoop is much easier. No static schema creation, no ETL, etc. Just use a wizard to download data from your database, log files, social media, etc.
Designing analytics or making changes is a lot faster and can even be done by non technical users.
No need to install anything else, since Datameer includes everything you need for importing, analytics, scheduling, security, visualization, etc. in one product.
If you have real-time requirements, you should not pull data directly out of Datameer, Hive, Impala, etc. Columnar storage makes some processing faster but will still not be low latency. But you can use those tools together with a low-latency database: use Datameer/Hive/Impala for the heavy lifting to filter and pre-aggregate big data into smaller data, and then export that into a database. In Datameer you can set this up very easily using one of its wizards.
Hope this helps,
Peter Voß (Datameer)
Calling all database guys...
The situation is this:
I have a DB2 database that is being written to and read from. I need to do some performance testing on programmatically executed read/writes.
I know how to write a program to read/write to this database, but I am not sure as to what factors I should consider in my performance test.
Do I need to worry about the difference between one session reading/writing vs multiple sessions?
What is the best way to interact with DB2 itself to get the amount of time these executions take?
The process I am testing is basically like a continuous batch process, constantly taking messages and persisting them. There will probably only be one or two sessions max on the DB at any given time.
Is time it takes to read/write really the best metric?
I am sure there are plenty of tools for this sort of testing. Any advice is appreciated.
Further info:
One thing I am considering is to run X number of reads/writes with my (homebrew) database API and try to time how long they take. Unfortunately, DB2 will buffer these messages. Is there any way to get DB2 to do a callback when it is done with a read/write? Or some way to externally measure the time these operations take (a tool, etc.)?
What is the goal of your performance testing? Is it to test performance for concurrent users, or to test the load of a batch process? Based on this, there are tools available to test it. You may want to look at JMeter from Apache.
In that case, you may want to trigger a couple of concurrent processes to simultaneously CRUD the data and monitor the activity using Performance Expert or something similar. While you do that, you may want to use larger data sets so that you can find any bottlenecks. Search for performance tuning on the IBM Redbooks site and you will find some case studies for this.
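For the "externally measure the time" part, something along these lines is a simple starting point. It is only a sketch: it assumes the ibm_db Python driver, a placeholder DSN and table, and it turns autocommit off and commits explicitly so buffered work is flushed before the timer stops.

    import time
    import ibm_db

    # Placeholder connection string and table.
    conn = ibm_db.connect("DATABASE=testdb;HOSTNAME=dbhost;PORT=50000;"
                          "PROTOCOL=TCPIP;UID=user;PWD=secret;", "", "")
    ibm_db.autocommit(conn, ibm_db.SQL_AUTOCOMMIT_OFF)

    stmt = ibm_db.prepare(conn, "INSERT INTO messages (id, body) VALUES (?, ?)")

    N = 10000
    start = time.perf_counter()
    for i in range(N):
        ibm_db.execute(stmt, (i, "payload-%d" % i))
    ibm_db.commit(conn)                 # flush buffered work before stopping the clock
    elapsed = time.perf_counter() - start
    print("%d inserts in %.2f s (%.0f/s)" % (N, elapsed, N / elapsed))

Running the same loop from two or three processes at once will show how a second session changes the numbers.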
One huge factor in DB2 performance is how Buffer Pools are configured. e.g. http://www.ibm.com/developerworks/data/library/techarticle/0212wieser/0212wieser.html
I have a large amount of data I need to store, and be able to generate reports on - each one representing an event on a website (we're talking over 50 per second, so clearly older data will need to be aggregated).
I'm evaluating approaches to implementing this. Obviously it needs to be reliable, and it should be as easy to scale as possible. It should also be possible to generate reports from the data in a flexible and efficient way.
I'm hoping that some SOers have experience of such software and can make a recommendation, and/or point out the pitfalls.
Ideally I'd like to deploy this on EC2.
Wow. You are opening up a huge topic.
A few things right off the top of my head...
think carefully about your schema, with inserts in the transactional part and reads in the reporting part; you may be best off keeping them separate if you have really large data volumes
look carefully at the latency you can tolerate between real-time reporting on your transactions and aggregated reporting on your historical data. Maybe you should have a process which runs periodically and aggregates your transactions (a sketch of such a process follows this list).
look carefully at any requirement which sees you reporting across your transactional and aggregated data, either in the same report or as a drill-down from one to the other
prototype with some meaningful queries and some realistic data volumes
get yourself a real production-quality, enterprise-ready database, e.g. Oracle / MSSQL
think about using someone else's code/product for the reporting e.g. Crystal/BO / Cognos
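To make the "periodic aggregation process" point concrete, here is a bare-bones sketch, assuming a Postgres-style database and psycopg2; the tables and columns are invented, and a scheduler such as cron would call roll_up once per closed hour.

    import psycopg2

    ROLLUP_SQL = """
    INSERT INTO events_hourly (bucket, event_type, event_count)
    SELECT date_trunc('hour', occurred_at), event_type, COUNT(*)
    FROM events
    WHERE occurred_at >= %s AND occurred_at < %s
    GROUP BY 1, 2
    """

    def roll_up(conn, window_start, window_end):
        # Aggregate one closed time window from the transactional table
        # into the reporting table.
        with conn.cursor() as cur:
            cur.execute(ROLLUP_SQL, (window_start, window_end))
        conn.commit()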
as I say, huge topic. As I think of more I'll continue adding to my list.
HTH and good luck
@Simon made a lot of excellent points; I'll just add a few and re-iterate/emphasize some others:
Use the right datatype for the Timestamps - make sure the DBMS has the appropriate precision.
Consider queueing the capture of events, allowing multiple threads/processes to handle the actual storage of the events (see the sketch after this list).
Separate the schemas for your transactional database and your data warehouse.
Seriously consider a periodic ETL from transactional db to the data warehouse.
Remember that you probably won't have 50 transactions/second 24x7x365 - plan for peak transactions vs. average transactions.
Investigate partitioning tables in the DBMS. Oracle and MSSQL will both partition on a value (like date/time).
Have an archiving/data retention policy from the outset. Too many projects just start recording data with no plans in place to remove/archive it.
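A rough sketch of the queueing idea from the list above: the web tier drops events on an in-process queue and a background worker batches them into the database, so capture never blocks on storage. The driver, statement and event shape are placeholders (each event is assumed to be an (occurred_at, event_type, payload) tuple, and a DB-API style connection is assumed).

    import queue
    import threading

    event_queue = queue.Queue()
    BATCH_SIZE = 500

    def capture(event):
        # Called ~50 times per second by the web tier; returns immediately.
        event_queue.put(event)

    def writer(conn):
        # Background worker: drain the queue in batches and insert them together.
        batch = []
        while True:
            batch.append(event_queue.get())        # block until at least one event
            while len(batch) < BATCH_SIZE and not event_queue.empty():
                batch.append(event_queue.get())
            with conn.cursor() as cur:
                cur.executemany(
                    "INSERT INTO events (occurred_at, event_type, payload) "
                    "VALUES (%s, %s, %s)", batch)
            conn.commit()
            batch.clear()

    # threading.Thread(target=writer, args=(db_conn,), daemon=True).start()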
I'm surprised none of the answers here cover Hadoop and HDFS; I would suggest that is because SO is a programmers' Q&A site and your question is in fact a data science question.
If you're dealing with a large number of queries and long processing times, you would use HDFS (a distributed storage format on EC2) to store your data and run batch queries (i.e. analytics) on commodity hardware.
You would then provision as many EC2 instances as needed (hundreds or thousands, depending on how big your data-crunching requirements are) and run MapReduce queries against your data to produce reports.
Wow.. This is a huge topic.
Let me begin with databases. First, get something good if you are going to have crazy amounts of data. I like Oracle and Teradata.
Second, there is a definite difference between recording transactional data and reporting/analytics. Put your transactional data in one area and then roll it up on a regular schedule into a reporting area (schema).
I believe you can approach this in two ways:
Throw money at the problem: Buy best in class software (databases, reporting software) and hire a few slick tech people to help
Take the homegrown approach: build only what you need right now and grow the whole thing organically. Start with a simple database and build a web reporting framework. There are a lot of decent open-source tools and inexpensive agencies that do this work.
As far as the EC2 approach goes, I'm not sure how it would fit into a data storage strategy. The processing requirement is limited, which is where EC2 is strong; your primary goal is efficient storage and retrieval.