EC2 instance types for running a TitanDB cluster

I'm currently getting started on building up a graph database. For that I'm using Titan 1.0 and Cassandra 2.1.12 as the storage backend. For now I'll rely on Titan's internal mechanisms for indexing and won't add any external indexing service like Elasticsearch.
As for the general context the graph will be used in: for now it should mostly contain friendship and follower relations of my user base. Regarding read and write load, I expect some write load (e.g. when a user bulk-adds a lot of friends) and at the same time a lot of reads (e.g. a user wants a list of his friendships).
Today I ran some load tests and repeatedly saw spikes in the metrics that Titan outputs.
I was wondering what kind of EC2 instances are best for running Titan? Right now I'm using r3.large, but I was wondering whether somewhat more CPU-optimized instances would work better. Are there any benchmarks for different instance types out there?

Since the answer to your question is a little subjective, I am going to point you in the direction of a post on Performance Tuning Titan in AWS. The post's author provides a comparison between the m4.large and m4.2xlarge with a Titan stack:
As you can see, moving from an m4.large (2 vCPU, 8 GiB memory) instance to an m4.2xlarge (8 vCPU, 32 GiB) only gives a 9% gain in performance when running this particular query, which shows it isn't bound by memory or CPU.
He points out that having multiple instances, each running an individual service, will allow for fine-grained tuning. This will help you once the architecture is in production, since the expected read/write percentages are unknown. I think splitting the services onto specific instances is going to give you more freedom to tune the stack than simply moving to a larger instance.

Related

EC2 host type for a DynamoDB batchWrite call

I have a requirement to bulk-upload an Excel sheet to a DynamoDB table, and the maximum number of rows is 200,000. The website for bulk upload will be used infrequently, so we can assume only 1-2 bulk uploads are being processed at a given time. In the backend, I am using the Apache POI API to parse the Excel sheet into DynamoDB items.
Because we can only send up to 25 items in a batchWriteItem call, the current latency is around 15 minutes (900 seconds) to upload all 200,000 items. Hence I am planning to implement multi-threading to execute multiple batchWriteItem API calls in parallel. Can you help me understand which EC2 host types are best suited for multi-threading for this purpose?
Any references will be really helpful.
Normally, multi-threading is helped by using an instance type that has multiple CPUs.
However, you are describing an operation that waits on the network rather than the CPU, so it is likely not heavily impacted by CPU utilization.
The best way to answer your question is to recommend that you experiment with different instance types to find the one that is best for your application's combination of needs:
Pick an instance family (e.g. m5) and try a few different sizes
Compare this against another family (e.g. c5) to see whether the improved performance is worth the extra cost
Monitor the application to find the bottleneck, which will be either RAM, CPU, network, or disk access
Please note that smaller instances have less Network bandwidth, so you might need to choose a larger instance type to avoid being throttled on network bandwidth. This might result in excess CPU that isn't being fully utilized.
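Not an instance-type recommendation, but for reference, a minimal sketch (using the AWS SDK for Java v1) of the parallel batchWriteItem approach described in the question. The table name, thread count, and batch splitting are placeholder assumptions you would adapt to your own setup:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.model.BatchWriteItemRequest;
    import com.amazonaws.services.dynamodbv2.model.BatchWriteItemResult;
    import com.amazonaws.services.dynamodbv2.model.WriteRequest;

    public class ParallelBatchWriter {

        private static final String TABLE_NAME = "my-table"; // hypothetical table name
        private static final int BATCH_SIZE = 25;            // hard limit per batchWriteItem call
        private static final int THREADS = 8;                 // tune to the instance's vCPU count

        private final AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
        private final ExecutorService pool = Executors.newFixedThreadPool(THREADS);

        /** Splits the parsed rows into 25-item batches and writes them in parallel. */
        public void writeAll(List<WriteRequest> allRequests) {
            for (int start = 0; start < allRequests.size(); start += BATCH_SIZE) {
                int end = Math.min(start + BATCH_SIZE, allRequests.size());
                List<WriteRequest> batch = new ArrayList<>(allRequests.subList(start, end));
                pool.submit(() -> writeBatch(batch));
            }
            pool.shutdown();
        }

        private void writeBatch(List<WriteRequest> batch) {
            Map<String, List<WriteRequest>> pending = new HashMap<>();
            pending.put(TABLE_NAME, batch);
            // batchWriteItem may return unprocessed items when throttled;
            // a production version should retry them with exponential backoff.
            while (!pending.isEmpty()) {
                BatchWriteItemResult result =
                        client.batchWriteItem(new BatchWriteItemRequest().withRequestItems(pending));
                pending = result.getUnprocessedItems();
            }
        }
    }

Keep in mind that throughput is also capped by the table's provisioned write capacity, so adding threads only helps up to that limit.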

Best practices for use of Neo4j on Google Compute Engine / Amazon EC2 instances

There is a very nice guide on optimizing a Linux machine for Neo4j, but this guide assumes the typical characteristics of a physical hard drive. I am running my Neo4j instances on Google Compute Engine and Amazon EC2. I am unable to find any document detailing an optimal setup for these virtual machines. What resources do I need in terms of memory (for heap or extended use) and disk speed/IOPS to get optimal performance? I currently have a couple of million nodes and about ten million relationships (2 GB), and the data size is increasing with imports.
On EC2 I used to rely on SSD scratch disks and then make regular backups to permanent disks. There is no such thing available on Compute Engine, and the write speeds don't seem very high to me, at least at normal disk sizes (because speed changes with size). Is there any way to get reasonable performance on my import/index operations? Or do these operations have more to do with memory and compute capacity?
Any additional reading is welcome...
Use local disks whenever possible; SSDs are better than spinning disks; try provisioned IOPS on AWS.
EBS is not a good fit; it is slow and jittery.
No idea for Compute Engine, though; you might want to use more RAM and try to load larger parts of the graph into memory.
Additional reading: http://structr.org/blog/neo4j-performance-on-ext4
You should still check the other things mentioned in that blog post, like the Linux scheduler, write barriers, etc.
It is better to set the memory-mapping settings manually. And for the second-level cache, probably check out the enterprise edition with the HPC cache.
See also this webinar on hardware sizing: https://vimeo.com/46049647
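As a rough sketch of what setting those memory-mapping values manually could look like on a 1.x/2.x-era Neo4j, in conf/neo4j.properties (the sizes below are placeholder assumptions; size them to your actual store files and leave headroom for the JVM heap):

    # Hypothetical values - match these to the sizes of the store files on disk
    neostore.nodestore.db.mapped_memory=512M
    neostore.relationshipstore.db.mapped_memory=2G
    neostore.propertystore.db.mapped_memory=512M
    neostore.propertystore.db.strings.mapped_memory=512M
    neostore.propertystore.db.arrays.mapped_memory=128M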

AWS RDS Provisioned IOPS really worth it?

As I understand it, RDS Provisioned IOPS is quite expensive compared to the standard I/O rate.
In the Tokyo region, the P-IOPS rate is $0.15/GB and $0.12/IOPS for a standard deployment (double the price for a Multi-AZ deployment...).
For P-IOPS, the minimum required storage is 100 GB and the minimum IOPS is 1,000.
Therefore, the starting cost for P-IOPS is $135 (100 GB x $0.15 + 1,000 IOPS x $0.12), excluding instance pricing.
In my case, using P-IOPS costs about 100x more than using the standard I/O rate.
This may be a very subjective question, but please give some opinion.
For a database fully optimized for RDS P-IOPS, would the performance be worth the price?
or
The AWS site gives some insight into how P-IOPS can benefit performance. Are there any actual benchmarks?
SELF ANSWER
In addition to the answer that zeroSkillz wrote, I did some more research. However, please note that I am not an expert at reading database benchmarks. Also, the benchmark and the answer were based on EBS.
According to an article written by Rodrigo Campos, the performance does actually improve significantly.
From 1,000 IOPS to 2,000 IOPS, the read/write (including random read/write) performance doubles. From what zeroSkillz said, a standard EBS volume provides about 100 IOPS. Imagine the improvement in performance when 100 IOPS goes up to 1,000 IOPS (the minimum for a P-IOPS deployment).
Conclusion
According to the benchmark, the performance/price seems reasonable. For performance-critical situations, I guess some people or companies should choose P-IOPS even when they are charged 100x more.
However, if I were a financial consultant at a small or medium business, I would just scale up (CPU, memory) my RDS instances gradually until the price/performance matches P-IOPS.
OK. This is a bad question because it doesn't mention the size of the allocated storage or any other details of the setup. We use RDS and it has its pluses and minuses. First, you can't use an ephemeral storage device with RDS. You can't even access the storage device directly when using the RDS service.
That being said, the storage medium for RDS is presumed to be based on a variant of Amazon's EBS. Performance for standard IOPS depends on the size of the volume, and there are many sources stating that above 100 GB of storage they start to "stripe" EBS volumes. This provides better average-case data access on both reads and writes.
We currently run about 300 GB of allocated storage and can get 2k write IOPS and 1k IOPS about 85% of the time over a several-hour period. We use Datadog to log this so we can actually see it. We've seen bursts of up to 4k write IOPS, but nothing sustained like that.
The main symptom we see on the application side is lock contention when the write IOPS are not enough. The number and frequency of these in your application logs will tell you when you are exhausting the IOPS of standard RDS. You can also use a service like Datadog to monitor the IOPS.
The problem with provisioned IOPS is that it assumes steady-state volumes of writes/reads in order to be cost-effective. This is almost never a realistic use case and is the kind of problem Amazon started cloud services to fix. The only assurance you get with P-IOPS is that a maximum throughput capability is reserved for you. If you don't use it, you still pay for it.
If you're OK with running replicas, we recommend running a read-only replica as a non-RDS instance on a regular EC2 instance. You can get better read IOPS at a much cheaper price by managing the replica yourself. We even set up replicas outside AWS using stunnel, with SSD drives as the primary block device, and we get ridiculous read speeds for our reporting systems - literally 100 times faster than we get from RDS.
I hope this gives some real-world details. In short, in my opinion, unless you must ensure a certain level of throughput capability on a constant basis (or your application will fail), there are better alternatives to provisioned IOPS, including read/write splitting with read replicas, memcache, etc.
So, I just got off a call with an Amazon systems engineer, and he had some interesting insights related to this question (i.e. this is second-hand knowledge).
Standard EBS volumes can handle bursty traffic well, but eventually they taper off to about 100 IOPS. The engineer suggested several alternatives.
Some customers use multiple small EBS volumes and stripe them. This improves IOPS and is the most cost-effective option. You don't need to worry about mirroring because EBS is mirrored behind the scenes.
Some customers use the ephemeral storage on the EC2 instance (or RDS instance) and have multiple slaves to "ensure" durability. Ephemeral storage is local storage and much faster than EBS. You can even use SSD-provisioned EC2 instances.
Some customers configure the master to use provisioned IOPS or SSD ephemeral storage, then use standard EBS storage for the slave(s). Expected performance is good, but failover performance is degraded (though still available).
Anyway, if you decide to use any of these strategies, I would recheck with Amazon to make sure I haven't forgotten any important steps. As I said before, this is second-hand knowledge.

Geoserver and thread count

We're using Geoserver, and we have performance problems in production with a large number of users.
We've run some load tests with 250, 150, and 20 threads. We've noticed that Geoserver works better with 20 threads than with 150, and that when the thread count increases (150 or 250), performance decreases.
Is this normal? How does Geoserver manage user requests? Does Geoserver use an asynchronous strategy to manage user requests?
Thanks in advance.
Sounds pretty normal. Threads (and CPU context switches) aren't free, and at some point you will spend more time thrashing around switching threads than actually doing anything useful. It's often better to have a much smaller number of threads (number of cores * 2 is often reasonable) combined with some sort of front-end queue that accepts a connection and holds it until a worker is free.
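To illustrate that pattern in general terms (this is not Geoserver's actual internals), here is a minimal Java sketch of a fixed-size worker pool sized at cores * 2 and fed by a bounded queue; the queue capacity and request handler are placeholders:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class BoundedWorkerPool {
        public static void main(String[] args) {
            int workers = Runtime.getRuntime().availableProcessors() * 2;

            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    workers, workers,
                    0L, TimeUnit.MILLISECONDS,
                    new ArrayBlockingQueue<>(1000),                 // front-end queue holding pending requests
                    new ThreadPoolExecutor.CallerRunsPolicy());     // apply back-pressure instead of dropping work

            for (int i = 0; i < 5000; i++) {
                final int requestId = i;
                pool.execute(() -> handleRequest(requestId));
            }
            pool.shutdown();
        }

        private static void handleRequest(int id) {
            // Placeholder for the actual request handling (e.g. rendering a map tile).
        }
    }

The point is that only a small, fixed number of threads ever compete for the CPU; everything else waits in the queue instead of adding context-switch overhead.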
Here are some real-world use-case statistics for you. In production, for mobile/web apps serving 'Google Maps'-style users in the outdoor market, my company has tested various configurations (several of them discussed by theonlysandman, a contributor to this question), which also support the observation by Tyler Evans, another contributor to this question.
We need loads of greater than 5,000 requests/second (qps), and since our Geoserver instances consistently topped out at nearly 100 qps each, we would need to scale horizontally and vertically to over 50 Geoserver instances.
Parameters: mostly vector sources; local PostGIS databases, all less than 2 TB each, with no table > 1M records (or, if greater than 1M, geometry simplified at nodes > 1 m apart); 60%-40%-10% WMS/WMTS/WFS requests; Google Cloud-hosted servers, each with 32 cores and an SSD drive cluster up to 4 TB.
The qps bottleneck appears to be Geoserver itself (styling, reprojection, all the niceties that come with it). I'm not claiming it is poorly written, but the heavier a car gets, the slower it might drive.
If we replicate the WFS requests using Go or Python (with or without GDAL) to access the PostGIS data directly, we get faster throughput than Geoserver (up to 1,000 qps per instance or more, at which point PostGIS becomes the bottleneck).
The same goes for our homemade Java microservice that creates PBF/MVT tiles from PostGIS; it, too, was very quick, at about 1,000 qps.
For us, Nginx performed slightly better than PHP (~110 qps vs. ~89 qps), but this could be a result of the Apache configuration.
Where do we go from here? In all of our production use cases, serving miniature sharded SQLite/MBTiles databases (vector or raster) and maintaining them with custom code was far more performant and scalable for our users.
We may write a Java plugin for Geoserver that pushes GeoWebCache TMS tiles into a Google Cloud Storage bucket designed for slippy z/x/y calls; this way we could more easily maintain a tile pyramid with updates, etc., using Geoserver tools.
The more threads, the heavier the load on the server. See the Wikipedia article on thrashing.
Geoserver performance is affected by many things. My advice is to look at each one and see where the bottleneck is occurring.
Here is a list of questions to set you on the correct path:
What are the specs of your machine? It should have an SSD.
Are you generating your tiles on the fly, or are they pre-seeded?
If you are pre-seeding, is that running?
NOTE: pre-seeding helps but hammers the system, so it is best done outside production.
What is the source of your data? If PostGIS, are you using spatial indexes?
Is PostgreSQL/PostGIS on the same machine?
How many types of tiles are you generating?
NOTE: you could be generating extra tiles which you don't need/use.
Do you use GeoWebCache?
With some more details, I can help you out.

Are Amazon's micro instances (Linux, 64bit) good for MongoDB servers?

Do you think using an EC2 instance (micro, 64-bit) would be good for MongoDB replica sets?
Seems like if that is all they did, and with 600+ MB of RAM, one could use them for a nice set.
Also, would they make good primary (write) servers too?
My database is only 1-2 gigs now but I see it growing to 20-40 gigs this year (hopefully).
Thanks
They COULD be good, depending on your data set, but most likely they will not be very good.
For starters, you don't get much RAM with those instances. Consider that you will be running an entire operating system and all related services - 613 MB of RAM can fill up very quickly.
MongoDB tries to keep as much data in RAM as possible, and that won't be possible if your data set is 1-2 GB; it becomes even more of a problem if your data set grows to 20-40 GB.
Secondly, they are labeled as "low I/O performance", so when your data swaps to disk (and it will, given the size of that data set), you are going to suffer from slow disk reads due to low I/O throughput.
Be aware that micro instances are designed for spiky CPU usage, and you will be throttled to the "low background level" if you exceed the allotment.
The AWS micro instance documentation has good information on what they are intended for.
Between the CPU and the not-very-good I/O performance, my experience using micros for development/testing has not been great (larger instance types have been fine, though), but a micro may work for your use case.
However, there are exceptions for config or arbiter nodes; I believe a micro should be good enough for those types of machines.
There is also some MongoDB documentation specific to EC2 which might help.
